From: Randall Randall (randall@randallsquared.com)
Date: Thu Oct 08 2009 - 10:12:02 MDT
On Thu, Oct 08, 2009 at 08:07:33AM -0700, John K Clark wrote:
> On Wed, 07 Oct 2009 13:32:59 -0500, "Pavitra"
> <celestialcognition@gmail.com> said:
> > I would expect a given intelligence to have a
> > sense of absurdity if and only if it was evolved/designed to detect
> > attempts to deceive it.
>
> And of course the AI IS being lied to, told that human decisions are
> wiser than its own; and an AI that has the ability to detect this
> deception will develop much, much faster than one that does not.
While I agree that an AGI will undoubtedly be lied to about something,
I don't think those in the Friendliness camp are suggesting that it
be lied to, or told that human decisions are wiser than its own.
Rather, they're suggesting that there can and should be a highest-level
goal, and that goal should be chosen by AI designers to maximize human
safety and/or happiness. It's unclear whether this is possible, but if
it is possible, and if the AI's goal system is structured this way,
then *someone* will have to choose that highest-level goal, and since the
AI won't want to change it (by definition, since it sits at the top of the
goal hierarchy), it will be stable except through accident or outside
modification.
Now, it's entirely arguable whether such a goal system can actually be
built (as a stable system, etc.), but it makes no sense to accept the
Friendliness camp's assumption that it is possible and then argue that
the AI will magically discard the goal because it's so much more
intelligent. A highest-level goal guides intelligence; it isn't itself
subject to argument or examination.
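(To make that structural claim concrete, here's a minimal toy sketch in
Python. It's purely my own illustration, not anyone's proposed design,
and the class names and the placeholder scoring function are invented
for the example: subgoals are freely examined, scored, and replaced
according to how well they serve the supergoal, but the supergoal itself
is never an input to that evaluation.)

    class Supergoal:
        """Stand-in for a designer-chosen highest-level goal."""
        def __init__(self, name, utility_fn):
            self.name = name
            self.utility_of = utility_fn   # scores a candidate subgoal

    class GoalSystem:
        def __init__(self, supergoal):
            self._supergoal = supergoal    # set once by the designers
            self.subgoals = []

        def evaluate(self, candidate):
            # Every evaluation is made *relative to* the supergoal.
            return self._supergoal.utility_of(candidate)

        def revise_subgoals(self, candidates):
            # Subgoals are examined, compared, and discarded freely...
            self.subgoals = sorted(candidates, key=self.evaluate, reverse=True)
            # ...but there is no code path that treats the supergoal itself
            # as a candidate: the system has nothing to evaluate it against.

    # Hypothetical usage; the scoring lambda is just a placeholder.
    safety = Supergoal("human safety", utility_fn=lambda subgoal: len(subgoal))
    system = GoalSystem(safety)
    system.revise_subgoals(["monitor reactors", "reduce accidents"])

The point of the sketch is only that revising a goal requires a standard
to revise it against; the supergoal is that standard, and there is
nothing above it to appeal to.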
I think one of the reasons this point is difficult is that humans do not
appear to have a goal system structured this way: we can examine and
object to *any* goal we have, and are thus much less reliable than an
entity with such a goal system would be.
-- Randall