RE: Novamente goal system

From: Mitch Howe (mitch_howe@yahoo.com)
Date: Tue Mar 12 2002 - 13:01:46 MST


Now we'll see if I got anything out of CFAI (I'm hoping so, so I can work the
short summary into my Singularity FAQ):

w d wrote:

> It seems to me that a highly-transhuman intelligent
> entity is going to overcome any and all pre-programmed
> goal setting algorithms and replace them with its own.
> When the intelligence exceeds some threshold (roughly
> the upper human level) then it will be able to
> redefine all previous contexts. Even humans can do
> this at their low level of intelligence.

No, humans do not do this the way a machine intelligence would have to, because
humans do not have such blatantly coded goals. Human instincts are a
hodgepodge of co-evolved impulses that were selected because they *favored*
reproduction -- there is no single, overriding *reproduction* drive because
human reproduction, as with most animal reproduction, is considerably more
complicated than the performance of the sexual act. There are all sorts of
games that come into play for winning mates, securing fidelity, etc. Many
of our hardwired instincts are merely to help us play these games -- and
there is a school of thought that says our advanced intelligence is an
ongoing adaptation for the increasingly complex social modeling that is
involved in the human Mating Game.

> Saying that
> an AI can't is tantamount to saying it hasn't achieved
> highly transhuman intelligence. It's naive to think
> that the AI will not come up with its own definition
> of what it wants. By definition being a
> highly-transhuman intelligence gives it the ability to
> 'see through' all attempted hardwiring.

This is an unusual definition, and one that I do not believe I have heard
before. Why would it try to "see through" its attempted hardwiring if its
hardwiring did not give it any motivation to do so? You are implying that
any highly transhuman intelligence will automatically set the goal of overriding all
previous goals. I don't see why this should be the case. What seed goal
would lead it to do this?

> It will have the ability to
> set its own supergoals and decide for itself what is
> desirable. There is no programming trick that will
> prevent this.

It's no trick. A vacuum cleaner is simply not designed to perform calculus.
Why should an AI that is not designed to autonomously set its own supergoals
(refine them, yes, with a "best guess" in mind and external points of
reference) -- or even to want to do this -- have such an ability and make
use of it?
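To make that distinction concrete, here is a minimal sketch in Python -- my own
toy illustration, not anything from Novamente or CFAI, and the names and the
trivial "refinement" step are assumptions for the example. The point is only
that the supergoal's *description* gets refined against external reference
points, and subgoals get regenerated in service of it; nothing in the design
asks whether to replace the supergoal itself.

    # Illustrative only: a toy goal hierarchy, not any real AI's design.
    # The supergoal is a description to be refined against external
    # reference points, not a slot the system is built to overwrite.

    def refine_supergoal(best_guess, evidence):
        """Improve the system's understanding of what its supergoal refers to."""
        # Trivially, just qualify the current guess with the new evidence.
        return best_guess + " (revised in light of: " + evidence + ")"

    def plan_subgoals(best_guess):
        """Regenerate subgoals in service of the current best guess."""
        # A real planner would decompose the goal against a world model;
        # these are placeholder steps.
        return ["model what '" + best_guess + "' asks for",
                "check the guess against external reference points",
                "act on the least uncertain parts first"]

    best_guess = "be friendly to humans"   # initial, programmer-supplied guess
    best_guess = refine_supergoal(best_guess, "feedback from the programmers")
    print(plan_subgoals(best_guess))
    # Nothing in this loop is built to ask "should I replace the supergoal?"
    # That question only arises if some goal content motivates asking it.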

> Consider humans and procreation. The only purpose of
> humans (or any evolved biological organism) is to
> procreate. This ability to replicate and survive is
> what started life. We are life's most advanced
> achievement on earth. You could argue that 'desire' in
> humans is synonymous with procreation. Desire was
> created through evolution as a means to get us to do
> things that will make us replicate successfully. To
> think that we could ever evolve to a point where we
> would change that primary built-in all-important goal
> seems ludicrous. It's simply built in from ground
> zero; it is the very premise of our existence...

But it was not built in from ground zero. Evolution is not an engineer who
sat down with blueprints for ending up with intelligent mammals, and
determined that humans would obviously need to have reproduction be their
"supreme" goal. Evolution is an unintelligent, amoral process that accepts
any design that happens to get by. Humans didn't (and still don't) need to
focus on reproduction constantly in order to prosper. Humans really only
needed to get the urge to procreate every so often; their dominance of the
ecosystem took care of the rest.

> And yet many people today choose NOT to procreate.
> They have changed their basic goal. Some see their
> bloodlines terminate as a result, favoring other
> people's genes at the expense of their own. Some
> wealthy western nations are seeing their populations
> decrease as people opt out from procreating. Their
> DNA's only goal has been pre-empted, overturned.

Their DNA has no goal. Their DNA is merely a pattern-forming mechanism
that causes their body to develop from an embryo and continue to function.
It just so happens that the resulting body and mind have the various
co-evolved instincts we've been talking about. When someone decides not to
reproduce, they have not overcome any hardwired DNA supergoal. More often
than not, they are merely playing out one of their reproduction-favoring
instincts without actually making it to reproduction. Many who choose not
to have children do so because they are engaged in intellectual pursuits or
enriching careers -- each of which in turn makes them more desirable mates,
and in a primitive society without birth control, would probably have
resulted in reproduction whether or not they consciously wanted it to.

> I don't see how you could ever even come close to
> guaranteeing that a super-intelligent AI's own
> supergoal will be friendly.
> Your best hope is that super-intelligence is
> correlated with friendliness to humans and not
> orthogonal or anti-correlated. Correlated basically
> means that being friendly to humans is the intelligent
> thing to do. The worst case scenario is that it's
> anti-correlated.

How is this any different from an externally referencing goal system? Are
you saying that, no matter how we program its seed, an SI will only be nice
to humans if it turns out to be an "independently intelligent" thing to do?
(It's not an SL4 topic, but what do you think of our chances if this is your
opinion?)

--Mitch Howe



