Uploads and AIs (was: Deliver us from...)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Apr 06 2001 - 17:45:18 MDT


Brian Atkins wrote:
>
> Power is power... The ability to do the things a superintelligence can do
> is way way more power than anyone on Earth has ever possessed. The human
> may be able to upgrade him/herself carefully enough to posthumanity/
> superintelligence such that they never are really tempted by it. Or they
> might not. The simple fact that they will be the first person trying to
> upgrade their mind (an evolved, and probably still undeciphered organ if
> uploading becomes available somehow before AI) by trying various hacks
> is dangerous in and of itself. An AI that understands its own source code
> would seem inherently safer.

For me, the superiority of AI over uploading lies chiefly in two facts:
First, uploading is a technology that lies years further in the future
than AI *or* military nanotechnology. If you're postulating that
uploading would naturally be developed before both of the
"alternatives", I want to know how. If you're trusting a transhuman AI,
a sort of limited Transition Guide, to upload the first humans and then
leave it to them from there, I want to know why this path doesn't
subsume almost all the risk of straightforward seed AI development.

That said, if I had both an uploading device and a seed AI in front of me
- *which is not the case* - which one I'd choose would depend on how good
the AI was. If ve'd been run through a few rounds of wisdom tournaments
(see _FAI_), and just looked better than human at handling both
philosophical crises and self-modification, I'd go with the AI, of
course. Ve'd be starting out with a much higher level of ability and
morality.

If I had to pick between Hugo de Garis's AI design and Brian, I'd pick
Brian in an instant. Heck, I'd pick Christian L. over a non-Friendly AI,
because Christian L. has built-in causal rewrite semantics and the NFAI
doesn't.

Second, as I currently see it, it takes only a finite amount of effort
to create a threshold level of Friendliness - and more importantly,
structural Friendliness - beyond which you can be pretty sure that the
AI has the same moral structure as a human; or rather, a moral structure
that can handle anything a human can. Past that threshold, a human's
inexperience at self-modification, and a human's emotional problems,
become disadvantages.

However, it seems to me nearly certain that the potential for a hard
takeoff - supersaturated computing power - will exist years before
uploading becomes possible. Thus, the question is simply one of Friendly
AI, unFriendly AI, or someone blowing up the world.

> > > You want to talk about who designs the first AI? Well who decides who gets
> > > to be the first upload?
> >
> > Actually, no, I don't want to talk about either one.
>
> Well you kind of have to if you are going to champion the uploading path
> to Singularity...

I agree with Brian. Anyone who wants to talk about uploading has to
explain who goes first, how they're selected, how many people go first,
where - computationally - they live, how they vote, and what happens in
the case of a rogue upload, or if I, as an upload, want to write a seed
AI. If it's a single human doing self-enhancement, I want to know how the
changes are tested out (at least, until some decent level of transhumanity
is reached).

I'd probably go with one human, three at the most. I'd be mostly
concerned with finding a human who (a) was willing to hold off on the
emotional modifications and concentrate on just increasing intelligence
for a while, and (b) at least overtly and explicitly, as a surface-level
decision, thought that rationalization and irrationality and
non-normative cognition were bad things. I doubt that Christian L.
*believes himself* to tolerate irrationality, and that is perhaps the
single most important quality to start out with.

It might work. I just think it would be an unnecessary risk if you have
a structurally complete AI standing in front of you, and *definitely*
the riskier choice if you have a wisdom-tournamented, philosophically
transhuman AI on hand.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


