Re: When Subgoals Attack

From: Durant Schoon (durant@ilm.com)
Date: Wed Dec 13 2000 - 13:02:27 MST


> Durant Schoon wrote:
> >
> > Problem: A transhuman intelligence(*) will have a supergoal (or
> > supergoals) and might very likely find it practical to
> > issue sophisticated processes which solve subgoals.
> >
> > So the problem is this: what would stop subgoals from
> > overthrowing supergoals. How might this happen? The subgoal
> > might determine that to satisfy the supergoal, a coup is
> > just the thing.
>
> This is not the first time I've heard this possibility raised. My answer
> is twofold: First, I've never heard a good explanation of why an
> intelligent subprocess would decide to overthrow the superprocess.

Referring to the original "Revising a Friendly AI":

> Scenario 1:
> BG: Love thy mommy and daddy.
> [...]
> Scenario 2:
> [...]
> Scenario 3:
> [...]
> Scenario 4:
> [...]

Your examples show how things can go awry (due to misinterpretations).
Assuming you cannot effectively *always* avoid misinterpretations between
supergoal and subgoal, room is left for problems, especially if the
supergoal allows the subgoal to determine its own subgoals.

And it looks like you address that in your second point:

> Second, I've never heard a good explanation of why a transhuman would
> decide to spawn intelligent subprocesses if it involved a major risk to
> cognitive integrity.

But isn't that the model?

(super)goal->(sub1)goal->(sub2)goal->(sub3)goal->...->(subN)goal
  <smart>     <smart>     <smart>      <dumb>           <dumb>

Where <smart> can be defined as determining its own subgoals. Maybe
when you get to <dumb>, the whole subprocess is known to be effective
and is integrated as a reflex at that point.

For a sophisticated problem, MOST of these higher subgoals would need
a fair amount of intelligence to devise their own subgoals.
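
To make that concrete, here's a toy Python sketch (purely my own
illustration, made-up structure and all, not anything from your paper):
each <smart> node devises its own subgoals and hands them off in
parallel, while a <dumb> leaf is just a known reflex executed blindly.

    from concurrent.futures import ThreadPoolExecutor

    def pursue(goal):
        # <dumb>: a leaf goal is a known reflex, executed blindly.
        if not isinstance(goal, list):
            return "did:" + goal
        # <smart>: devise subgoals and trust each subprocess to
        # administer its OWN subgoals, in parallel.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(pursue, goal))

    print(pursue(["plan", ["gather", "build"], ["test", ["fix", "retest"]]]))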

So what's the alternative? A transhuman that didn't follow this model
might look like this:

(super)goal->(sub1)goal->(sub2)goal->(sub3)goal->...->(subN)goal
  <smart>      <dumb>      <dumb>      <dumb>           <dumb>

Where the supergoal has worked out all the details first and the
subgoals are all blind executions. The problem is that this model is
inherently *serial*. The supergoal can only work on one problem at a
time, because it can't trust subprocesses enough to administer their
own subgoals safely.
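
And the same toy, done the serial way (again, just my own sketch): the
supergoal flattens the entire plan itself before anything runs, and the
subprocesses are pure blind executions with no authority to decompose
anything on their own.

    def flatten(goal):
        # Only the supergoal does any decomposition, all of it up front.
        if not isinstance(goal, list):
            return [goal]
        steps = []
        for g in goal:
            steps.extend(flatten(g))
        return steps

    def pursue_serially(goal):
        # <dumb> all the way down: one blind step at a time.
        return ["did:" + step for step in flatten(goal)]

    print(pursue_serially(["plan", ["gather", "build"], ["test", ["fix", "retest"]]]))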

Or is this serial model somehow optimal for this reason?

PS - Ever think about how our brains are parallel, but our
consciousnesses are serial...nah, probably not related...

(Bad Durant! Stop posting! Do work!)

--
Durant.
 

