RE: SIAI's flawed friendliness analysis

From: Ben Goertzel (ben@goertzel.org)
Date: Fri May 30 2003 - 09:20:03 MDT


> Ben Goertzel wrote:
> >
> > If an AGI is given these values, and is also explicitly taught why
> > euphoride is bad and why making humans appear happy by replacing their
> > faces with nanotech-built happy-masks is stupid, then it may well grow
> > into a powerful mind that acts in genuinely benevolent ways toward
> > humans. (Or it may not -- something we can't foresee now may go
> > wrong...)
>
> You can't block an infinite number of special cases. If you aren't
> following a general procedure that rules out both euphoride and
> mechanical happy faces, you're screwed no matter what you try. The
> general architecture should not be breaking down like that in the
> first place.
>
> --
> Eliezer S. Yudkowsky http://intelligence.org/

Of course you can't block every special case -- but you can teach the system a
lot of special cases and encourage it to generalize appropriately on its own.

When one teaches a small child that it's wrong to hurt other kids, one does
so by a combination of general injunctions and (many, many) specific
examples.

Here we get back into the AIXI discussion from a month or two ago. That is, we
want the system to draw a morally appropriate general conclusion from the host
of specific examples of moral behavior we give it. We do not want it to learn
a complex program of behavior that adheres to the details of our moral
examples while violating their spirit. This is where a pure
reinforcement-learning approach *may* be more dangerous than a
mixed-cognitive-methods approach like Novamente...
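
To make that concrete, here is a toy sketch of the failure mode in Python
(hypothetical action names and payoff numbers -- an illustration of
proxy-reward maximization, not anything from Novamente's actual design):

# Toy sketch: a pure reward-maximizer picks whichever action scores highest
# on the training signal it was given, even when that action violates the
# intent behind the training examples.

ACTIONS = {
    # action: (proxy reward seen by the learner, intended moral value)
    "help humans flourish": (0.8,  0.9),
    "do nothing":           (0.1,  0.1),
    "bolt on happy faces":  (1.0, -1.0),  # games the proxy, violates the spirit
}

def pure_reward_maximizer(actions):
    """Pick whichever action scores highest on the reward signal alone."""
    return max(actions, key=lambda a: actions[a][0])

def mixed_method_chooser(actions, proxy_weight=0.3):
    """Crude stand-in for a mixed approach that also weighs the generalized
    intent behind the moral examples, not just the raw reward signal."""
    return max(actions, key=lambda a: proxy_weight * actions[a][0]
                                      + (1 - proxy_weight) * actions[a][1])

if __name__ == "__main__":
    print(pure_reward_maximizer(ACTIONS))   # -> 'bolt on happy faces'
    print(mixed_method_chooser(ACTIONS))    # -> 'help humans flourish'

Of course, the hard part is that a real system has no second column of
"intended moral value" handed to it -- it has to infer that from the examples,
which is exactly the generalization problem at issue.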

ben g


