Re: [sl4] Friendly AI and Enterprise Resource Management

From: Samantha Atkins (sjatkins@gmail.com)
Date: Sun Oct 10 2010 - 01:07:22 MDT


On Oct 7, 2010, at 9:58 AM, John K Clark wrote:

>
> On Tue, 5 Oct 2010 "Mindaugas Indriunas" <inyuki@gmail.com> said:
>
>> It might be that one of the best ways to bring about the friendly AI is
>> by trying to be very rational about one's own actions, defining one's own
>> goal of life, and doing it in such a way that the resulting goal would be the
>> objective good
>
>
> In other words, you're going to try to convince the AI that "objective
> good" means being good to a human being not to an AI like itself, even
> though the AI is objectively superior to the human by any measure you
> care to name. Pushing the virtues of such a slave mentality (sorry, I
> believe the politically correct term is friendly) to a being much
> smarter than you are is going to be a very hard sell.
>

It seems you are starting with an AGI that already has a "mind of its own," i.e., its own goal structure. Where did that come from? Where did it get the goal/value of only accepting goals it had vetted as "worthy"? Where did it get its notion of worthiness? Did it just appear ex nihilo? We do get some chance to set the initial goal structure. That may or may not be a good thing. I am worried about "Genie from the Magic Lamp" effects. "You get just one wish, which I will recursively get better and better at following utterly." *shudders*

- samantha
