Re: Deliver Us from Evil...?

From: Brian Atkins
Date: Thu Apr 05 2001 - 11:39:10 MDT

Samantha Atkins wrote:
> "Eliezer S. Yudkowsky" wrote:
> >
> > I'm also more inclined to trust an AI more than a person... maybe even
> > more than I'd trust myself, since I'm not designed for recursive
> > self-improvement. Actually, I should amend that: After I've been around
> > an AI and had a chance to chat with ver, then I expect to wind up
> > justifiably trusting that AI's seed morality around as much as I'd trust a
> > human seed morality, and I can also foresee the possibility of standing in
> > the AI's presence and just being overawed by vis seed morality. Either
> > way, I also expect to wind up trusting that AI's transcendence protocol to
> > preserve morality significantly more than I'd trust a human-based
> > transcendence protocol to preserve morality.
> >
> I don't see how this follows. If you upload a human who quickly
> self-improves ver capabilities and becomes an SI and if
> super-intelligence brings with it expanded moral/ethical understanding
> then I see no reason this combination is less trustworthy than starting
> from scratch and only putting in what you believe should be there in the
> beginning. Yes a lot of evolved complicated behavior and conditioning
> is not present in the AI. But some of that complicated behavior and
> conditioning is also the bed of universal compassion and utter
> Friendliness.

Lot of ifs there...

What it seems to come down to is that you are relying either on objective
morality (in which case an AI should do better, since it has less evolved
crap to deal with) or on a natural convergence to the "Friendly zone". In
the latter case we would also argue that a properly designed AI should easily
outperform a human attempting to upgrade him/herself. The reason, I think,
is easy to see: you can't really predict in advance which particular
human will become utterly Friendly and which will become the next Hitler
when presented with the total power that uploading/becoming an SI would
give them. History has shown a tendency for power to corrupt humans. At
least with an AI we can sharply reduce the risks by a) designing it right,
and b) testing, testing, testing.

You want to talk about who designs the first AI? Well, who decides who gets
to be the first upload?

Brian Atkins
Director, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT