Re: Si definition of Friendliness

From: James Higgins (jameshiggins@earthlink.net)
Date: Fri Apr 06 2001 - 09:30:48 MDT


At 03:24 AM 4/6/2001 -0400, Eliezer S. Yudkowsky wrote:
>James Higgins wrote:
> > As long as the Friendly AI people want to keep making it sound like
> > everything is going to go like clockwork and be perfect, I'm going to
> > continue to point out the opposite viewpoint.
>
>Now, in point of fact, I think there's a quite significant probability
>that everything will "go like clockwork and be perfect" - that the whole
>issue will be resolved cleanly, safely, and without once ever coming close
>to the boundary of the first set of safety margins.

You must be the greatest programmer in the universe, then. Every piece of
software has bugs, no matter how good the programmer, how many code reviews,
or how much testing. Software is simultaneously incredibly complex and
fragile. For your plan to go perfectly, you would have to be 100% bug-free,
have thought of every significant possibility, and have correctly programmed
for each one. The slightest error in architecture, implementation, or
knowledge could veer this thing in a totally different direction.

This has got to be the most complex piece of software I've ever imagined,
if not from a code point of view (you may be able to simplify much of the
code since this is a "seed"), then at least architecturally. To say that
your group is capable of creating the most complex and ambitious software
project ever, without making any mistakes, is the most arrogant thing I've
ever heard. Bar none.

I imagine you're a great programmer & architect. But I don't believe it is
possible for any human development team to get something like this perfect,
especially when rushing it. And since there is a race on to get there
first, it will be rushed in the end.

Considering all these factors, please explain why you think there is a
"significant probability that everything will 'go like clockwork and be
perfect'".

Just so you know, I am an incredibly good software architect and
programmer. But I doubt I would have much chance of getting this "perfect"
the first time.

> > We are playing with incredibly dangerous technology here. Not once have I
> > seen the powers that be on this list stop and ask "should we do this?"
>
>I have to echo Brian on this. The point of doubts is that they lead to
>questioning, and thence to ANSWERS. And we didn't start doing this
>yesterday. All known doubts have been taken into account and resolved
>into our current course of action, so we are unlikely to engage in
>spontaneous self-questioning unless there's a new fact, experience, or
>realization to act as a trigger factor. I'm sorry if this makes us look
>overconfident, but what are we supposed to do? Pretend to engage in
>spontaneous self-questioning for the PR benefit?

See my previous point in this message. Plus, I repeat what I replied to
Brian: if you have thought of all this, please educate me. If you don't
spend the time to write a paper specifically covering why we (the human
race) should pursue the singularity and why your approach is the best,
you're going to be asked hundreds of thousands of times over the next
decade.


