Re: Safety of brain-like AGIs

From: Larry (entropy@farviolet.com)
Date: Wed Feb 28 2007 - 10:09:50 MST


On Wed, 28 Feb 2007, Shane Legg wrote:

> I don't know of any formal definition of friendliness, in which case, how
> could I possibly
> ensure that an AGI, which doesn't yet exist, has a formal property that
> isn't yet defined?
> That applies to all systems, brain-like or otherwise.
>
> If we consider informal definitions, then clearly some humans are friendly
> and intelligent.
> Thus at an informal level, I don't see any reason why a brain-like system
> cannot be both.

I see this as a serious problem. At best, a human is friendly within
a fairly vague social context. Consider 200 years ago:

"He is a good man, he treats his slaves almost as if they where free men
  so long as they get the work done."

For the time, that was about as good as it got, except for those rare
people who could see beyond cultural norms. Even then, you sometimes
make things worse while trying to make them better. I'd argue that a day in
the stocks for petty theft is far 'friendlier' to all involved than
the supposedly more civilized month in jail, where you get repeatedly
beaten and raped, conveniently out of view of 'civilized' people.

I'd say defining friendliness isn't just a difficult problem; it's an
intractable one.

Back to the 200-years-ago theme: the human race is in the position of a
slave being shipped off for auction who has discovered he gets a choice
of which stagecoach to board. That is about our level of knowledge of
the situation.

But we do have one choice the slave didn't have: we can choose not
to go, or to delay until we know more. I don't think AI can be stopped
forever, but I think the human race needs to seriously consider holding
off the 'singularity' and advancing toward it slowly. Some view it as a
utopia, but that is not how world history has ever worked. Great
upheaval is usually very messy. Violently losing the human race entirely
is more likely than the utopian outcome. A much slower approach may
allow evolutionary steps, with time to grasp the next step at each
stage.
