From: Samantha Atkins (email@example.com)
Date: Fri Apr 06 2001 - 01:40:10 MDT
Brian Atkins wrote:
> Samantha Atkins wrote:
> > "Eliezer S. Yudkowsky" wrote:
> > >
> > > I'm also inclined to trust an AI more than a person... maybe even
> > > more than I'd trust myself, since I'm not designed for recursive
> > > self-improvement. Actually, I should amend that: After I've been around
> > > an AI and had a chance to chat with ver, then I expect to wind up
> > > justifiably trusting that AI's seed morality around as much as I'd trust a
> > > human seed morality, and I can also foresee the possibility of standing in
> > > the AI's presence and just being overawed by vis seed morality. Either
> > > way, I also expect to wind up trusting that AI's transcendence protocol to
> > > preserve morality significantly more than I'd trust a human-based
> > > transcendence protocol to preserve morality.
> > >
> > I don't see how this follows. If you upload a human who quickly
> > self-improves ver capabilities and becomes an SI, and if
> > super-intelligence brings with it expanded moral/ethical understanding,
> > then I see no reason this combination is less trustworthy than starting
> > from scratch and only putting in what you believe should be there in the
> > beginning. Yes, a lot of evolved complicated behavior and conditioning
> > is not present in the AI. But some of that complicated behavior and
> > conditioning is also the bed of universal compassion and utter
> > Friendliness.
> Lot of ifs there...
> What it seems to come down to is you are either relying on objective
> morality (in which an AI should do better since it has less evolved
> crap to deal with), or a natural convergence to the "Friendly zone". In
> which case we also argue that a properly designed AI should easily
> outperform a human attempting to upgrade him/herself. The reason, I think,
> is easy to see: you can't really predict in advance which particular
> human will become utterly Friendly vs. which particular human will become
> the next Hitler when presented with the total power uploading/becoming an
> SI would give them. History has shown a tendency for power to corrupt
> humans. At least with an AI we can sharply reduce the risks by a) designing
> it right b) testing testing testing
I think you may have accidentally palmed a card there. If you assume
morality is objective then increasing intelligence will tend toward it
whether that intelligence is posthuman or AI. Similarly, if the
Friendly Zone is what things naturally converge to with higher
intelligence then again I don't see that it would matter. History is
irrelevant. Total power for what? By definition the posthuman is no
longer human. Power over beings who are, relatively speaking, the
mental equivalent of ants? Who would care for that?
I don't agree that the AI reduces the risk. You have to selectively put
in much of the good that humans have while avoiding the ill.
> You want to talk about who designs the first AI? Well who decides who gets
> to be the first upload?
Actually, no, I don't want to talk about either one.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT