From: Brian Atkins (brian@posthuman.com)
Date: Fri Apr 06 2001 - 17:20:18 MDT
Samantha Atkins wrote:
>
> Brian Atkins wrote:
> >
> > Samantha Atkins wrote:
> > >
> > > "Eliezer S. Yudkowsky" wrote:
> > > >
> > > > I'm also inclined to trust an AI more than a person... maybe even
> > > > more than I'd trust myself, since I'm not designed for recursive
> > > > self-improvement. Actually, I should amend that: After I've been around
> > > > an AI and had a chance to chat with ver, then I expect to wind up
> > > > justifiably trusting that AI's seed morality around as much as I'd trust a
> > > > human seed morality, and I can also foresee the possibility of standing in
> > > > the AI's presence and just being overawed by vis seed morality. Either
> > > > way, I also expect to wind up trusting that AI's transcendence protocol to
> > > > preserve morality significantly more than I'd trust a human-based
> > > > transcendence protocol to preserve morality.
> > > >
> > >
> > > I don't see how this follows. If you upload a human who quickly
> > > self-improves ver capabilities and becomes an SI, and if
> > > super-intelligence brings with it expanded moral/ethical understanding,
> > > then I see no reason this combination is less trustworthy than starting
> > > from scratch and only putting in what you believe should be there in the
> > > beginning. Yes, a lot of evolved, complicated behavior and conditioning
> > > is not present in the AI. But some of that complicated behavior and
> > > conditioning is also the bed of universal compassion and utter
> > > Friendliness.
> > >
> >
> > Lot of ifs there...
> >
> > What it seems to come down to is that you are relying either on an
> > objective morality (in which case an AI should do better, since it has
> > less evolved crap to deal with), or on a natural convergence to the
> > "Friendly zone", in which case we also argue that a properly designed
> > AI should easily outperform a human attempting to upgrade him/herself.
> > The reason, I think, is easy to see: you can't really predict in
> > advance which particular human will become utterly Friendly vs. which
> > particular human will become the next Hitler when presented with the
> > total power uploading/becoming an SI would give them. History has shown
> > a tendency for power to corrupt humans. At least with an AI we can
> > sharply reduce the risks by a) designing it right, and b) testing,
> > testing, testing.
>
> I think you may have accidentally palmed a card there. If you assume
> morality is objective then increasing intelligence will tend toward it
As I said above, objective morality is a possibility, but I don't assume it.
> whether that intelligence is posthuman or AI. Similarly, if the
Tend toward it, yes, but which would tend toward it more quickly? A
human who has to fudge around with his evolved mind, or an AI that can
more easily pick and choose the modifications it makes when it upgrades
its mind? I would argue (without much basis :-/) that the AI has a
"clean slate" and will be able to reach objective morality more easily
and quickly. It may seem like a small advantage, but it is still one.
Also, with the upload you've still got to worry about whether that
particular personality will seek out the objective morality. It's a
guessing game as to whether you uploaded the right person. With a
properly designed and tested AI you can be almost 100% sure it will
seek the objective morality, or at least try to determine something
similar if one doesn't exist.
> Friendly Zone is what things naturally converge to with higher
> intelligence then again I don't see that it would matter. History is
Well, that isn't certain... what if only entities predisposed toward
Friendliness converge there? If so, then you have to pick the first
human upload very, very carefully. The AI's advantages in this
situation are clear, I think. To sum up, whether or not there is an
objective morality, the AI is the better choice in terms of both risk
and the time required to reach utter Friendliness.
> irrelevant. Total power for what? By definition the posthuman is no
> longer human. Power over beings that are, mentally, the relative
> equivalent of ants? Who would care for that?
>
> I don't agree that the AI reduces the risk. You have to selectively
> put in much of the good that humans have, as well as avoid the ill.
Power is power... The ability to do the things a superintelligence can
do is way, way more power than anyone on Earth has ever possessed. The
human may be able to upgrade him/herself to posthumanity/
superintelligence carefully enough that they are never really tempted
by it. Or they might not. The simple fact that they will be the first
person attempting to upgrade their mind (an evolved, and probably still
undeciphered, organ if uploading somehow becomes available before AI)
by trying various hacks is dangerous in and of itself. An AI that
understands its own source code would seem inherently safer.
>
> >
> > You want to talk about who designs the first AI? Well who decides who gets
> > to be the first upload?
>
> Actually, no, I don't want to talk about either one.
>
Well, you kind of have to if you are going to champion the uploading
path to the Singularity...
--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/