Re: When Subgoals Attack

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Dec 24 2000 - 15:49:06 MST


Samantha Atkins wrote:
>
> "Eliezer S. Yudkowsky" wrote:
> >
> > Because the human mind has an architecture which admits of no goal
> > hierarchy at all, or even a directional goal network. The human mind has
> > a grasp of the relation supergoal-of and subgoal-of; there's no
> > evolutionary need for things to be organized any more neatly than that.
> > And change propagation proceeds slowly relative to the speed of conscious
> > thought.
>
> So how do you know that that lack of a particular goal architecture isn't
> one of its strengths, one that self-consciousness and mental flexibility
> depend on? How do you know your AI will reach comparable consciousness
> and go beyond it with a more fixed hierarchical goal system?

Several reasons:

1) I am not proposing a more fixed hierarchical goal system. I am
proposing a goal network which is strictly directional and which strictly
distinguishes between supergoals and subgoals (see the sketch after point 3).

2) I do not know of any particular strength of self-consciousness or
mental flexibility which relies on goal directionality being local rather
than global. I do know that a lot of human philosophy rests on the useful
conflation of subgoals and supergoals, but this should be duplicable, and
in a more context-sensitive way, without the necessity for conflating all
goals into a single type.

3) My take on the human goal system is that it is unnecessarily complex
and contains a great deal of nonfunctional messiness. This does not mean
that a crystalline goal system is good. Goal systems for seed AIs will
need to be built up out of high-level thoughts, including fragments of
mental imagery that deal with multiple possibilities, context-sensitivity,
the ability to react to errors, and the ability to plan for errors;
Friendliness will require considerable philosophical consideration on top
of that.
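
A minimal sketch of what point 1 might look like as a data structure, in
Python for concreteness: subgoal-of links are one-way, anything circular
is refused, and only supergoals carry intrinsic value, so every subgoal's
worth is re-derived globally from the supergoals. All names here (Goal,
GoalNetwork, add_subgoal_of, propagate) and the 0.9 discounting rule are
hypothetical, invented purely for illustration.

class Goal:
    def __init__(self, name, is_supergoal=False, value=0.0):
        self.name = name
        self.is_supergoal = is_supergoal   # only supergoals have intrinsic value
        self.value = value                 # a subgoal's value is always derived
        self.children = []                 # goals that are subgoals of this one

class GoalNetwork:
    def __init__(self):
        self.goals = []

    def add_goal(self, name, is_supergoal=False, value=0.0):
        goal = Goal(name, is_supergoal, value)
        self.goals.append(goal)
        return goal

    def add_subgoal_of(self, child, parent):
        """Add a one-way subgoal-of link; refuse anything that would loop."""
        if self._reachable(child, parent):
            raise ValueError("link would make the network non-directional")
        parent.children.append(child)

    def _reachable(self, start, target):
        # Depth-first search along existing subgoal-of links.
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node is target:
                return True
            if id(node) not in seen:
                seen.add(id(node))
                stack.extend(node.children)
        return False

    def propagate(self):
        # Re-derive every subgoal's value from the supergoals in one global
        # pass, instead of letting changes spread slowly as in the human case.
        for goal in self.goals:
            if not goal.is_supergoal:
                goal.value = 0.0
        for goal in self.goals:
            if goal.is_supergoal:
                self._push(goal, goal.value)

    def _push(self, goal, contribution):
        # Hypothetical toy rule: each subgoal inherits a discounted share of
        # whatever value flows down to it from above.
        for child in goal.children:
            child.value += 0.9 * contribution
            self._push(child, 0.9 * contribution)

net = GoalNetwork()
friendliness = net.add_goal("Friendliness", is_supergoal=True, value=1.0)
learn = net.add_goal("model human psychology")
read = net.add_goal("read cognitive science papers")
net.add_subgoal_of(learn, friendliness)
net.add_subgoal_of(read, learn)
net.propagate()
print(read.name, read.value)   # a subgoal's value exists only via the supergoal

In a scheme like this, "elevating" a former subgoal would mean explicitly
promoting it to a supergoal, not merely rearranging links.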

> But if the initial statement itself, or rather its embodiment in goal[s]
> and subgoals, is not adequate for its satisfaction

"Satisfaction"? Where did that come from?

> then there is little
> choice but to rearrange goals themselves. Reword, recast, reprogram.
> It is quite possible that former subgoals become elevated in this
> process. The supergoal/subgoal hierarchy is a tool, a way of organizing
> the system; it is not the Goal itself.

No, it is not the Goal: it is a probabilistic image of the Goal; and a set
of constraints - in the informational sense - on which Goal is being
referred to; and, ultimately, if all else fails, a specification of the
Goal. Nonetheless, it is all the image that the system has at any given
point, unless genuine objective morality is discovered.

> Some problems may require "cheating" in that the goal system has strange
> linkages and loops that involve seeming or real contradictions to the
> design you propose.

Seeming contradictions are no problem. I defy you to name a challenge
which requires a real contradiction.

> Your above analogy would blow your system architecture apart. Over and
> over again we have found that individual freedom to pursue individual
> goals does more for the overall good than any sort of top down
> organization.

Really? Okay, I demand the immediate liberation of your visual cortex to
pursue the independent processing of whichever pixels it finds most
interesting. I can no longer tolerate your enslavement of a living
cognitive subsystem to your mere navigational requirements.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


