Re: SI definition of Friendliness

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Apr 06 2001 - 11:37:30 MDT


James Higgins wrote:
>
> At 03:24 AM 4/6/2001 -0400, Eliezer S. Yudkowsky wrote:
> >James Higgins wrote:
> > > As long as the Friendly AI people want to keep making it sound like
> > > everything is going to go like clockwork and be perfect, I'm going to
> > > continue to point out the opposite viewpoint.
> >
> >Now, in point of fact, I think there's a quite significant probability
> >that everything will "go like clockwork and be perfect" - that the whole
> >issue will be resolved cleanly, safely, and without once ever coming close
> >to the boundary of the first set of safety margins.
>
> You must be the greatest programmer in the universe then. Every piece of
> software has bugs, no matter how good the programmer, how many code
> reviews, or how much testing.
> For your plan to go perfectly, you would have to be 100% bug free, have
> thought of every significant possibility, and have correctly programmed
> for them.

You were the one who used the word "perfect"; I, in turn, defined that to
mean "without coming close to the boundary of the first set of safety
margins".

Does that sound like I'm assuming that all the code works the first time?
Like I said, the reason "Friendly AI" is so long...

Note the phrase "safety margins". Note the phrase "first set of".

> The slightest error in architecture, implementation or knowledge
> could veer this thing in a totally different direction.

If you're in a situation like that, you've *already* screwed up. You
screwed up for the first time when you proposed an architecture that was
that sensitive to errors, you screwed up for the second time when you
decided to implement it anyway, and you screwed up for the third time when
you failed to retreat and regroup after seeing how fragile it was.

> Software is simultaneously incredibly complex and
> fragile.

Only the human kind, and the AI is only the human kind of software while it
is young, infrahuman, and non-self-improving.

> See my previous point in this message. Plus, I repeat what I replied to
> Brian: if you have thought of all this, please educate me. If you don't
> spend the time to write a paper specifically covering why we (the human
> race) should pursue the singularity and why your approach is the best,
> you're going to get asked hundreds of thousands of times over the next
> decade.

Probably the closest thing is:
  http://sysopmind.com/sing/PtS/navigation/contents.html
Of course, the whole of PtS is now just a tad obsolete, but it should
serve as a first approximation.

In any case, what I object to is *not* your asking the question, but
rather your chiding us for never having thought about it. If you have a
question, then just *ask* it and I'll try to direct you to the appropriate
webpages, quod erat demonstrandum.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


