From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Thu Dec 14 2000 - 21:41:57 MST
Ben Goertzel wrote:
> > I would not call an intermediate goal a supergoal - that term should be
> > reserved strictly for the top layer. Rather, a layer-3 subgoal turns out
> > to be more important than the layer-2 subgoal, under - and this is the key
> > point - the ultimate arbitration of the layer-1 supergoal. The L3 subgoal
> > turns out to be useful not just for the particular L2 subgoal that spawned
> > it, but for another, higher-priority L2 subgoal, or perhaps even an L1
> > supergoal. You can't necessarily extrapolate from an L3 subgoal
> > overthrowing an L2 subgoal to conclude that an L2 subgoal can overthrow an
> > L1 supergoal.
> > Now, after all that, I'll also turn around and say that, in the human
> > mind, a subgoal can overthrow a supergoal!
> I don't understand why you don't think this contradiction completely
> invalidates your train of thought...
Because the human mind has an architecture which admits of no strict goal
hierarchy at all, nor even a directional goal network. The human mind has
a grasp of the relations supergoal-of and subgoal-of; there's no
evolutionary need for things to be organized any more neatly than that.
And change propagation proceeds slowly relative to the speed of conscious
thought.

A Friendly AI might be trained to refine and build up its Friendliness
supergoals; there will be subgoals associated with that project; so in a
technical sense, subgoals are affecting supergoals. But they aren't
promoting themselves. They are, to use an invalid social analogy, working
together for the common good on the Supergoal Project.
It should always be possible to choose a perspective which eliminates the
subgoals entirely, leaving only the consequences of supergoals.
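To make the arbitration point concrete, here is a minimal sketch of one way it could work (my own illustration, not anything from the email or any actual AI design): a subgoal carries no value of its own; its effective priority is derived entirely from the supergoals whose fulfillment it serves. Under that assumption, an L3 subgoal that also serves a high-priority L2 subgoal can come to outrank the minor L2 subgoal that spawned it, while the L1 supergoal remains the ultimate arbiter. The class names and weights are invented for the example.

```python
# Sketch: derived goal priority under supergoal arbitration.
# Assumption (not from the source): priority flows down from L1 supergoals
# as weighted sums; subgoals have no intrinsic value of their own.

class Goal:
    def __init__(self, name, intrinsic=0.0):
        self.name = name
        self.intrinsic = intrinsic   # nonzero only for L1 supergoals
        self.parents = []            # (goal served, weight of service)

    def serves(self, parent, weight):
        self.parents.append((parent, weight))

    def priority(self):
        # A goal's priority = its intrinsic value plus the value it
        # inherits from every goal it serves. Supergoals bottom out
        # at their intrinsic value; subgoals are purely derivative.
        return self.intrinsic + sum(w * p.priority() for p, w in self.parents)

# Layer 1: the sole supergoal, the only source of intrinsic value.
friendliness = Goal("friendliness", intrinsic=1.0)

# Layer 2: a major and a minor subgoal of the supergoal.
major = Goal("major L2 subgoal")
major.serves(friendliness, weight=0.6)
minor = Goal("minor L2 subgoal")
minor.serves(friendliness, weight=0.2)

# Layer 3: spawned by the minor subgoal, but it turns out to serve
# the major subgoal as well -- so its derived priority exceeds that
# of the L2 subgoal that spawned it.
l3 = Goal("L3 subgoal")
l3.serves(minor, weight=1.0)
l3.serves(major, weight=1.0)

# The L3 subgoal "overthrows" its parent L2 subgoal...
assert l3.priority() > minor.priority()
# ...but only because the L1 supergoal's arbitration says so; nothing
# in this scheme lets a subgoal's derived value rewrite the supergoal.
```

In this toy scheme the L3 subgoal's rise is exactly what the email describes: it isn't promoting itself; its priority is a bookkeeping consequence of which supergoals it serves, and dropping the subgoal nodes leaves only the consequences of the supergoal, as the last sentence above suggests.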
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence