From: Charles Hixson (email@example.com)
Date: Tue Jul 01 2008 - 14:26:19 MDT
On Monday 30 June 2008 10:25:58 pm Lee Corbin wrote:
> Martin writes
> >> Tim writes
> >> > By the way, if the AI has any long-term goals, then it will want to
> >> > preserve its own integrity in order to preserve those goals. Although
> >> > "preserve its own integrity" is a good enough example for the issue at
> >> > hand, it's not something you'd really need to put in there explicitly.
> > A strong sense of identity and self-modification are somewhat contradictory.
> Yes, but we know of many devices (e.g. us) that live with contradictions.
> It can be put in "society of mind" terms: a single entity may still have
> competing agents (e.g. a human being barely able to make up his mind
> in some particular case, a well-formed highly unified nation or group
> that has internal dissent but manages to keep from displaying its dirty
> laundry in public, or a simple apparatus with a built-in governor that
> shudders and shakes, unable to quite settle into a steady course of action).
> > (similar to drugs that change the personality)
> Yes. But people often do use them. And as illogical as it is to me
> (and I'm in a *really* small minority here), many people don't have
> a problem with becoming quite different over time.
> > That might even lead to the point that a self-aware AI does not
> > want to modify itself to preserve its current state of being.
> Yes, but it would then have to "care" about its existence---and
> that still seems to be a hot point of contention here about how
> "natural" or "necessary" it is.
> > Likewise a friendly AI might be aware that the next-level
> > AI it's supposed to develop might not be guaranteed friendly
> > and therefore refuse to develop it at all.
> Yes :-) much like we wonder if we should develop AI.
> (Of course, since our six billion people are not in a single
> high-integrity group that can come to a meaningful decision
> about it, AI will be developed by someone, like it or not.)
> > [obligatory new poster and foreign speaker disclaimer]
> Welcome! I don't think I'd be able to tell that you weren't
> a natural English speaker from the above writing. Thanks
> for the effort.
I don't think that it's "natural" for a created entity to have any particular
goal structure. I don't think that it's necessary that it
intrinsically "care about its existence", as I agree that this would be a
derived goal from almost any other goal. OTOH, I do think that it's probably
desirable. Other primary goals should definitely be stronger, but I do
believe that in-and-of-itself a desire to continue its own existence is a
reasonable intrinsic (primary) goal.
My terminology: Primary goals are the ones that are originally written into
the source code. Secondary goals are based in data files that are read.
Derived goals all refer back to either primary or secondary goals for their
justification.
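To make the terminology concrete, here's a toy sketch of that three-level
taxonomy. Every name and structure here is my own illustrative assumption,
not any real AI architecture: primary goals stand for things compiled into
the source, secondary goals for things read from data files, and derived
goals carry a reference back up the chain for their justification.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    kind: str                     # "primary", "secondary", or "derived"
    parent: "Goal | None" = None  # derived goals refer back to a parent goal

    def root(self) -> "Goal":
        """Follow the chain of derivation back to a primary or secondary goal."""
        g = self
        while g.parent is not None:
            g = g.parent
        return g

# Primary goal: written into the source code itself.
preserve = Goal("preserve own existence", "primary")
# Secondary goal: loaded from a data file.
help_entities = Goal("help entities", "secondary")
# Derived goal: justified only by reference back to its parent.
backup = Goal("keep backups", "derived", parent=preserve)

print(backup.root().name)  # traces back to "preserve own existence"
```

The point of the sketch is just that a derived goal has no standing of its
own: delete or change the primary goal it chains back to, and the derived
goal loses its justification with it.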
OTOH, almost all conversation about goals presumes extensive knowledge of the
external world. I don't see how this could be possible for either the
primary or the secondary goals. E.g., how are you going to define "human" to
an entity that has never interacted with one, either directly or indirectly,
and that probably only has a vague idea of object persistence? The only
approach that occurs to me is via imprinting, and that is notorious for its
drawbacks. This could probably be dealt with via a large number
of "micro-imprints", but then one encounters the problem of sensory
non-constancy. I'm sure this can be dealt with...but just how I'm not sure.
Possibly it would be desirable to have AIs practice Ahimsa, but I'm not
really sure that's logically possible. Still, a strong desire to cause the
minimal amount of damage coupled with a weaker desire to help entities might
get us through this. True, this means that somehow the concepts of "damage"
and "helpful" need to be communicated to an entity that doesn't have the
concept of object-permanence...and I don't see just how to do THAT, but it
looks like a much simpler problem than defining "human" to such an entity.
(But you'd need to balance things carefully, or you'd end up with an AI that
did nothing but "meditate", which wouldn't be particularly useful.)
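The balance problem can be shown with a toy calculation. All the weights and
actions below are invented for illustration: a strong weight against damage,
a weaker weight for helping, and inaction as the zero baseline. Note how easy
it is to pick weights where doing nothing wins every time.

```python
# Invented weights: strong aversion to damage, weaker drive to help.
HARM_WEIGHT = 10.0
HELP_WEIGHT = 1.0

def score(action):
    """Higher is better; inaction scores 0, so a helpful act must clear
    the bar set by its weighted harm to beat 'do nothing'."""
    return HELP_WEIGHT * action["help"] - HARM_WEIGHT * action["harm"]

actions = [
    {"name": "do nothing (meditate)", "help": 0.0, "harm": 0.0},
    {"name": "risky intervention",    "help": 5.0, "harm": 1.0},
    {"name": "cautious assistance",   "help": 2.0, "harm": 0.1},
]

best = max(actions, key=score)
print(best["name"])  # cautious assistance: 2.0 - 1.0 = 1.0 beats 0 and -5
```

With a slightly heavier harm weight, even the cautious option goes negative
and the AI "meditates" forever, which is exactly the balance trap mentioned
above.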
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT