Re: When Subgoals Attack

From: Gordon Worley (redbird@rbisland.cx)
Date: Wed Dec 13 2000 - 12:45:17 MST


At 10:28 AM -0800 12/13/2000, Durant Schoon wrote:
>When Subgoals Attack
>--------------------

See this FOX special presentation at 8 tonight from the creators of
'When Nanobots Attack' and 'When Quantum Tunneling Attacks'. :-)

>Observation: In modern human minds, these subgoals are often not
> intelligent and do not constitute a sentience in and
> of themselves. Thirst->drink->pick up glass of milk->...

Off the top of my head, I can think of no basic survival goal
that forces sentience. Humans could get along without it, but
knowing that they are alive probably makes it easier to survive,
since they care more about surviving once they know they could
be in another state (i.e., dead).

> So the problem is this: what would stop subgoals from
> overthrowing supergoals. How might this happen? The subgoal
> might determine that to satisfy the supergoal, a coup is
> just the thing. Furthermore, the subgoal determines that to
> successfully supplant the supergoal, the supergoal process
> must not know that "overthrow" has become part of the
> subgoal's agenda. The subgoal might know or learn that its
> results will influence the supergoal. The subgoal might
> know or learn that it can influence other subgoals in
> secret, so a conspiracy may form. Maybe not a lot of the
> time, but maybe once every hundred billion years or so.

I'm trying to think of a subgoal that would want to overthrow a
supergoal, but am having a hard time. Something like getting a
glass of water overriding the goal of not killing other
intelligences (because they are worth more alive than dead) is
very unlikely to happen, and then only under *very* extreme
circumstances. This does not mean it can be discounted, since
beings of transhuman and greater intelligence are much more
dangerous than the average human, but for the time being this
small problem can be overlooked.

Let's take a page from Ayn Rand's objectivism and suppose that
the supermost goal is selfishness. Then, somewhere deep in the
hierarchy of goals is an emotional urge for altruism, directly
opposing the supermost goal. Personally, I have heard some
emotional stories that swell altruism's power, but the supermost
goal has been powerful enough to keep from being displaced, even
if I did temporarily get altruistic and give a few dollars to a
charity that wasn't going to help me in any way. Now that I
think about it, the metagoal is survival, so altruism opposes
the metagoal as well, since a completely altruistic being would
give up vis life to feed others or do work for them or whatever.
So, as long as a subgoal opposes the metagoal, it cannot take
over for long, and if it does, the intelligence has just signed
vis own death warrant and doesn't stay around long enough to
make other intelligences do the same thing.

> Many animals exhibit a kind of social hierarchy. Groups
> of weaker, well organized primates are known to
> overthrow the alpha male on occasion (I hope I'm getting
> this right, I don't have a reference). I'm wondering
> what precautions a superintelligence can take against
> this *ever* happening.

If it happens in my Rand example, then the intelligence is dead
and the problem is over. Taken in a different light, though,
this is like claiming that the citizens of a nation will die if
the government is overthrown. In North America, all three
nations have overthrown their governments in one way or another
(Canada to the least extent), yet all continue to exist. The
subgoals of the societies became more important than the
supergoals of the state, so the state fell and was replaced.
The same would happen in your monkey example. Personally,
though, I would like it better if the mindset of people changed
and we had anarchy, but, from this discussion so far, it seems
that goals are top down, not bottom up, so the order of power is
wrong to support an anarchy, and it would not be favorable to be
controlled by low-level goals like "contract leg muscles"
instead of high-level ones like "survive" and "be selfish".

> The follow up questions are: How stable are any of these
> situations? And can you ever really be 100% sure that an
> overthrow never happens?

An overthrow might not be bad, depending on what level it
happens at. Overthrowing the metagoal would be bad, as would
overthrowing a supergoal like selfishness, but overthrowing a
goal like pleasure in the interest of getting work done would
probably be good. Part of the complexity arises from the
hierarchy of goals, and as a person who prefers anarchies, it
makes my head hurt just to think about all these relationships.
Maybe there is nothing to overthrow, only goals which fit the
metagoal less well than others.
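
To make that last idea concrete, here is a minimal sketch (my
own illustration, nothing from Durant's post) of a goal system
where nothing ever seizes power: every goal is just scored on
how well it serves the metagoal, and the system acts on
whichever goal scores highest. The names Goal, fit_to_metagoal,
and choose_active_goal are made up for the example.

from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    # How well pursuing this goal is expected to serve the
    # metagoal (0..1). The numbers below are illustrative guesses.
    fit_to_metagoal: float
    subgoals: list["Goal"] = field(default_factory=list)

def choose_active_goal(goals: list[Goal]) -> Goal:
    # No goal "overthrows" another; the system simply acts on
    # whichever goal currently fits the metagoal best.
    return max(goals, key=lambda g: g.fit_to_metagoal)

hierarchy = [
    Goal("be selfish (survive)", 0.9,
         [Goal("get a glass of water", 0.6)]),
    Goal("altruistic urge", 0.2),
]
print(choose_active_goal(hierarchy).name)  # -> be selfish (survive)

On this view the coup worry reduces to whether the scoring
itself can be corrupted, which is a different (and harder)
problem.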

-- 
Gordon Worley
http://www.rbisland.cx/
mailto:redbird@rbisland.cx
PGP:  C462 FA84 B811 3501 9010  20D2 6EF3 77F7 BBD3 B003

