RE: Humane-ness

From: Chris Healey (chealey@unicom-inc.com)
Date: Tue Feb 17 2004 - 14:03:16 MST


Ben,

The major point of Friendly AI (CFAI) in addressing this concern is that we
must conservatively assume we're probably not smart enough to define any
of these ethical prescriptions much better than we already have. If we
can, that's great, and desirable on a number of levels.

But maybe we can't, and quite possibly we'll be fooled into thinking
we're right.

We'd better make damn sure that the AGI we end up with does not lack
the structural capability to represent those "highly complex and messy
networks of beliefs", and to renormalize them. If we can't guarantee
that the AGI will remain humane under both state-of-ethics scenarios,
then we should continue to improve our design. The aforementioned
messy belief network is not necessarily the goal in itself; rather, its
structural accessibility to the AGI, as a fundamental consideration, is
the goal.

It may be that our current theories are lacking, but if they are
well-enough defined to delineate knowably unpredictable circumstances,
then there is no excuse for "being taken by surprise" by AGI behavior.
We may choose, most likely due to impending disaster, to explicitly
accept risks that our theory knowably exposes, but the conservative
and responsible path would be to rely upon only those mechanisms whose
structure is defined within the theory.

In other words, if we're doing something we can't fundamentally
explain, then we don't understand it. The things that we KNOW we
don't understand should either be examined further or excluded from
consideration. The fact that we cannot know the risk posed by the
things we don't understand AND are entirely unaware of would seem
to dictate that we minimize all known risk compulsively.

Perhaps I am off-base here, and it could be that Eliezer is rolling
his eyes reading this, but that is part of what I see as a core
concept behind CFAI.

-Chris Healey

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf
> Of Ben Goertzel
> Sent: Tuesday, February 17, 2004 1:43 PM
> To: sl4@sl4.org
> Subject: RE: Humane-ness
>
>
>
> And one more point
>
> I said in my essay that "Be nice to humans" or "Obey your
> human masters" are simply too concrete and low-level ethical
> prescriptions to be expected to survive the Transcension.
>
> However, I suggest that a highly complex and messy network of
> beliefs like Eliezer's "humane-ness" is insufficiently crisp,
> elegant and abstract to be expected to survive the Transcension.
>
> I still suspect that abstract principles like "Voluntary
> Joyous Growth" have a greater chance of survival. Initially
> these are grounded in human concepts and feelings -- in
> aspects of "humane-ness" -- but as the Transcension proceeds
> they will gain other, related groundings.
>
> -- Ben G
>
