Re: Volitional Morality and Action Judgement

From: Mark Waser (mwaser@cox.net)
Date: Sat May 29 2004 - 19:17:30 MDT


> > 1. Why do you believe that a single FAI is the best strategy?
> a) It is simpler to create.
> b) Having one being around with the capability of destroying humanity is
> less risky than having more than one, in the same way that having one
> human being with a Pocket Planetary Destruct (TM) device is less risky
> than having more than one.

Ah. I wasn't clear. What I was envisioning is a set-up with multiple FAIs
where none of them is permitted to take an action unless all agree. Having
three human beings with nuclear keys (all of which are required to fire the
missiles) is less risky than having two or one.
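
To make the set-up concrete, here is a toy sketch (Python, purely
illustrative; the propose_action function and the three overseer rules are
names I made up for the example, not a design anyone has proposed) of the
unanimous-consent gate I mean:

    # Toy sketch of a unanimous-consent ("N-of-N") gate.
    from typing import Callable, Sequence

    def propose_action(action: str,
                       overseers: Sequence[Callable[[str], bool]]) -> bool:
        """Permit `action` only if every overseer independently approves it."""
        votes = [approves(action) for approves in overseers]
        if all(votes):
            print(f"Permitted: {action}")
            return True
        print(f"Vetoed ({votes.count(False)} of {len(votes)} objected): {action}")
        return False

    # Three independent "keys": any single veto blocks the action, so adding
    # overseers can only make the gate more conservative, never less.
    def cautious(a: str) -> bool:
        return "destroy" not in a

    def conservative(a: str) -> bool:
        return a.startswith("observe")

    def permissive(a: str) -> bool:
        return True

    propose_action("observe and report", [cautious, conservative, permissive])    # permitted
    propose_action("destroy the biosphere", [cautious, conservative, permissive]) # vetoed

The point of the sketch is only the veto structure: no action goes through
unless every member approves, so a failure of the whole system requires
every member to fail in the same direction at the same time.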

> > To me, it's a single point of failure or like a
> > mono-culture of crops in agriculture. One mistake
> > or one disease and POOF! smiley faces everywhere.
> This is a false analogy. With crops there are many species and varieties
> that will approach the desired result more or less closely. Choosing a
> variety that doesn't produce well doesn't end humanity. With an RSI AI
> the outcome becomes binary: some sort of continued life, or annihilation.
> This is not a false dichotomy; life/death is about as binary as you can
> get.

Choosing a variety that doesn't produce well and relying only on that
variety will result in starvation. It's not that bad an analogy . . . .
:-)

> > Why do you think that NASA uses multiply redundant
> > systems?
> The multiply redundant systems for NASA's launch vehicles were created
> because having them reduces the overall risk of failure. This is not true
> of RSI AI. An RSI AI is analogous to an entire launch vehicle that might
> kill you. If launching the first one doesn't kill you, then you might try
> again, but otherwise you're dead.

I think that my clarification above addresses this point.

> > 2. Why do you believe that relying on Eliezer and
> > only Eliezer is a good strategy?
> It is a lousy idea. I don't believe MW ever said it was a good idea.

I may have misinterpreted him, but that was my understanding of what he said.

> > [snip] I do expect serious engagement with the most
> > seriously engaged other participants.
> It *is* always enjoyable when that happens, and often informative, but
> perhaps you shouldn't always expect it. Sometimes people simply disagree
> about ideas; we *are* all running on slightly different brainware and
> different knowledge bases. Once an idea has been beaten to death, with
> little progress on either side, it is sometimes worthwhile to
> disengage.

I don't *always* expect it. The problem is that it seems to be very much
the exception rather than the rule.

> > Eliezer has devolved to "everything is too dangerous"
> > but "I'm much too busy to discuss it or even write it
> > up for anybody" and I think that is a REALLY BAD
> > THING(tm).
> I suspect this is a REALLY BAD THING(tm) to you because you may be relying
> on Eliezer. My advice is: don't. One of SIAI's stated goals is to grow
> its programmer team, and with that team improve and develop FAI. Either
> donating yourself, or persuading others to donate would help tremendously
> toward that goal. And part of that goal is: "making it so that Eliezer is
> neither considered to be nor in fact a failure point or bottleneck."

I'm not relying on Eliezer. I do think that it's unfortunate that others
apparently are. And, as I've said, I have volunteered to donate myself.

> In the Ben vs. Eliezer debates each party has a set of cognitive models
> they are using to reason about the ideas. Ben's model projects outcomes
> along one trajectory, Eliezer's along another. The models are complex,
> and would not be easy to communicate using human language, even if both
> parties had perfect introspection, which they do not. The parties may
> never agree until one of them builds a working AI and points at it
> saying: "There. That's what I'm talking about."

Yes, and I've had numerous debates with Ben too . . . . but they don't
generally end with "I'm too busy to clarify my thoughts to explain it to
you". *My opinion* is that Eliezer's work would progress a lot faster if
he could collaborate with others, and that his friends/supporters should
be trying to convince him to do so. Watching Eliezer refuse to give the
time of day to those individuals who appear most capable of assisting
him (or of developing FAI on their own) is most frustrating.

        Mark


