Re: [sl4] Long-term goals

From: Lee Corbin (lcorbin@rawbw.com)
Date: Thu Jul 03 2008 - 21:49:18 MDT


Charles writes

> I don't think that it's "natural" for a created entity to have any particular
> goal structure.

I agree.

> derived goal from almost any other goal. OTOH, I do think
> that it's probably desirable.

You and Isaac Asimov both.

> Derived goals all refer back to either primary or secondary goals for their
> significance.
>
> OTOH, almost all conversation about goals presumes extensive knowledge of the
> external world. I don't see how this could be possible for either the
> primary or the secondary goals. E.g., how are you going to define "human" to
> an entity that has never interacted with one, either directly or indirectly,
> and that probably only has a vague idea of object persistence?

The thing hardly qualifies as an AI if it doesn't have the ontology
of a three-year-old. And if an AI can understand what trees, cars,
tablespoons, and about twenty thousand other items are---that is,
can reliably classify them from either sight (pictures) or feel---then
it's going to know what a human being is, though (just as with
any of us) it will be undecided about some borderline cases.
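
To put the borderline-case point concretely, here is a toy sketch of
my own (nothing anybody has actually built or proposed): a
nearest-prototype classifier over crude feature vectors standing in
for sight and feel, which withholds judgment whenever two categories
are too close to call. All the numbers are made up for illustration.

import math

PROTOTYPES = {               # hypothetical learned category centers
    "tree":       [0.9, 0.1, 0.8],
    "car":        [0.2, 0.9, 0.3],
    "tablespoon": [0.1, 0.2, 0.1],
    "human":      [0.7, 0.6, 0.9],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features, margin=0.15):
    """Return the nearest category, or 'undecided' if two are too close."""
    ranked = sorted(PROTOTYPES, key=lambda c: distance(features, PROTOTYPES[c]))
    best, runner_up = ranked[0], ranked[1]
    gap = distance(features, PROTOTYPES[runner_up]) - distance(features, PROTOTYPES[best])
    if gap < margin:
        return "undecided"   # borderline case: withhold judgment
    return best

print(classify([0.72, 0.58, 0.88]))  # -> human
print(classify([0.5, 0.5, 0.5]))     # -> undecided (borderline)

Scale the prototype list up to twenty thousand categories and richer
features, and you have the kind of competence I mean.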

Lee

> The only
> approach that occurs to me is via imprinting, and that is notorious for its
> drawbacks. This could probably be dealt with via a large number
> of "micro-imprints", but then one encounters the problem of sensory
> non-constancy. I'm sure this can be dealt with...but just how I'm not sure.
> Possibly it would be desirable to have AIs practice Ahimsa, but I'm not
> really sure that's logically possible. Still, a strong desire to cause the
> minimal amount of damage coupled with a weaker desire to help entities might
> get us through this. True, this means that somehow the concepts of "damage"
> and "helpful" need to be communicated to an entity that doesn't have the
> concept of object-permanence...and I don't see just how to do THAT, but it
> looks like a much simpler problem than defining "human" to such an entity.
> (But you'd need to balance things carefully, or you'd end up with an AI that
> did nothing but "meditate", which wouldn't be particularly useful.)
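
On that last balancing point: a minimal sketch (again mine, with
made-up weights and action scores) shows how lopsided weighting
produces exactly the "meditating" agent you describe. Score each
candidate action as weakly weighted expected benefit minus strongly
weighted expected damage; crank the damage weight high enough and
doing nothing always wins.

ACTIONS = {                  # action: (expected_damage, expected_benefit)
    "meditate":    (0.00, 0.00),
    "fetch water": (0.05, 0.60),
    "build house": (0.15, 0.90),
}

def score(action, damage_weight, help_weight):
    damage, benefit = ACTIONS[action]
    return help_weight * benefit - damage_weight * damage

def best_action(damage_weight, help_weight):
    return max(ACTIONS, key=lambda a: score(a, damage_weight, help_weight))

# A workable balance: helping can outweigh a small expected damage.
print(best_action(damage_weight=1.0, help_weight=0.5))   # -> build house
# Damage aversion set far too high: the agent does nothing but "meditate".
print(best_action(damage_weight=20.0, help_weight=0.5))  # -> meditate

So the balance isn't mysterious in principle; it's a weighting
problem. Picking the weights for a real agent is the hard part.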


