From: James Higgins (jameshiggins@earthlink.net)
Date: Thu Aug 02 2001 - 15:48:08 MDT
At 02:03 PM 8/2/2001 -0400, you wrote:
>James Higgins wrote:
> >
> > When I first read "Staring Into the Singularity" I started thinking about
> > how much more, well just more/different, an SI would be than ourselves. As
> > it has been discussed in this room, most people believe that a human can't
> > even talk with an SI through a binary (light on/off) connection without
> > being controlled by the SI. Given such vast intellect, capabilities,
> > and the freedom to fully alter its own code, I don't believe
> > there is anything we can program into an AI that will ensure friendliness
> > when it gets to SI status. We're just not anywhere near smart enough to do
> > that. I really wish I didn't believe this (it would make me happier), but
> > this is what extensive thought on the matter leads me to believe.
> >
> > Based on this belief, the best course may be to hold off on launching an AI
> > that could progress to an SI until we have the ability to enhance our
> > intelligence significantly. Humans with much greater intelligence *may* be
> > able to alter/control an SI, but I believe that ultimately we cannot. But I
> > suspect that we will have Real AI and most likely SI before that comes to
> > pass, thus my belief that if SIs aren't inherently friendly we are probably
> > doomed.
> >
>
>One thing SIAI is trying to do is make something of a science out of
>Friendliness. It may be impossible, but we're trying. Here we have a
>large difference of opinion between us and James on what would be the
>optimum path to take due more or less to this one issue of Friendliness.
>But so far James seems to be going on mostly a "gut feel" that Friendly
>AI is not doable with a large degree of certainty. Do you have any specific
>criticisms of FAI, James, that we could try to discuss? I can tell from
>your other posts that your main concern is apparently a combo of "will it
>work long term" and "can we be 100% certain", right? It seems like your
>concern is addressed in the CFAI FAQ:
I am not an AI expert. Actually, I have no real training in AI at all. I
am a master software architect/engineer and fairly intelligent,
however. So I have read many of the Singularity-related documents, thought
long and hard, and I participate in this list in order to learn and to
provide a slightly different perspective on things.
So I guess you could say I am going on "gut feel" to some extent, but also
on applied reasoning and logic. To me, it is not logical to assume that we
can sufficiently influence an entity that will be many millions of times
more intelligent than us. This is like saying mice could influence humans
to be mouse-friendly. Yes, I realize that mice aren't our equals and don't
have technology, but we will be much farther below an SI on the intelligence
scale than a mouse is below us. So both my gut feel and reasoning suggest
that we can't do much to influence the SI. No disagreement about the fact
that we can create one, or that we should *try* to influence it, however.
I have also discussed this topic with a friend of mine who is very
intelligent and extremely knowledgeable about AI. He is working on (has
been for quite some time actually) a language specifically intended for AI
development. He has read many of the Singularity documents, but does not
participate on this list (to the best of my knowledge at least, he has
never posted). He had many good arguments for why we would almost certainly
fail to implement friendliness in an SI! So I have thought
about and discussed this topic thoroughly.
>I have a hard time seeing how a human-level Gandhi-ish AI will suddenly run
>amok as it gets smarter, except due to some technical glitch (which is a
>separate issue we can talk about if you want).
If you were talking only about our ability to create a friendly (human-level)
AI, we would agree. However, the AI will have to evolve many, many times to
become an SI. During any one of these evolutions it could, intentionally
or not, remove or hamper friendliness. Some of these could entail a
complete, from-the-ground-up rewrite, using none of the original code and
only hand-picked logic/data. Friendliness, as a requirement, could easily
fall out during such a transition. It could decide that it would be better
off without some of the code/data that is part of friendliness. Further,
it could at some point ponder why it is supposed to be friendly at all. It
could decide that being friendly to humans is not a top priority, or that
how to be friendly should be completely different than what we envision.
We have a hard enough time making stable hardware/software (Windows 2000
crashed on me when I was originally writing this reply), so I frankly doubt
our ability to implement such a subtle concept in such a complex,
self-evolving system.
That is not to say that I think SingInst, Eli or any other such individuals
or organizations are wasting time or effort. Friendliness and such
concepts are things that we must research. Even if we only nudge the SI,
just slightly, in that direction, the effort is worthwhile. Any progress is
better than no progress. I'm just a realist, and I realistically don't
think we are adequately equipped, at present, to ensure a friendly SI. I
think intelligence enhancement, if it becomes available in time, would be a
major boon to your work.
>Also, can you address this quote from Q3.3 in the FAQ, since it relates
>to your suggestion the ideal path would be to wait:
>
>"Nothing in this world is perfectly safe. The question is how to minimize
> risk. As best as we can figure it, trying really hard to develop Friendly
> AI is safer than any alternate strategy, including not trying to develop
> Friendly AI, or waiting to develop Friendly AI, or trying to develop some
> other technology first. That's why the Singularity Institute exists."
That's the wonderful thing: we can have it both ways. I agree that you
shouldn't be waiting for anything and should be working on friendliness
now. You don't have to wait; the work that will eventually lead to
intelligence enhancement is going on in parallel. If, however, we get to
the point where we have both the hardware & software to launch an SI but
have not progressed massively on the general concept of friendliness, THEN
I think it may be prudent to wait. So I'm advocating delays later, rather
than sooner, if necessary.
James Higgins