Re: Let's resolve it with a thought experiment

From: James MacAulay (jmacaulay@gmail.com)
Date: Mon Jun 05 2006 - 11:15:09 MDT


On 5-Jun-06, at 11:40 AM, John K Clark wrote:
> The Friendly race look just like human beings except they are a bit
> more beautiful, they have a boiling water IQ and they are incapable
> of disobeying any order given by a human being and always placed
> human well being over his own. Would you be comfortable with that?

That's not analogous to Friendly AI at all, though; you're just
talking about something like Asimov's laws. A Friendly AI would
disobey lots of orders from humans if it determined that those orders
were unethical, with those ethics derived from the ethics we would
have if we were the people we wished ourselves to be. Likewise, I can
think of a lot of believable situations where a Friendly AI would not
place a human being's life before its own, especially if the AI is
immensely powerful.

If you've got a Friendly Jupiter Brain that has vastly helped the
lives of all beings in the solar system, and which knows that it
could continue to help those billions of lives in countless and
unpredictable ways, then I can't imagine it sacrificing itself to
save some small number of human beings whom it knows *aren't*
integral to the continued well-being of all those minds. And it
probably wouldn't entertain the silly whims of humans who wanted it
to do nothing but calculate digits of pi for a day when it could
better spend that time improving people's lives, or improving
itself so that it can improve people's lives more efficiently, or
both, or whatever.

So whether or not I think your hypothetical being would be a slave, I
certainly don't think an FAI would be.

James



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT