RE: Regulating AI Development

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Mar 01 2002 - 16:32:49 MST


> > it could have generated profits to fund some of our real AI
> > development (which was "Friendly" in nature, although I don't
> > share all of Eli's views on goal system architecture). But, well...
> > just another example of how the individuals and organizations
> > controlling the resources do not value this sort of thing.
>
> Which views *do* you share? Last time I checked, you disagreed with the
> concept of a Friendliness-topped goal system, and you can't do
> much without that...

Not all AI architectures have an explicitly "anything-topped" goal system,
Eliezer.

In Novamente, Friendliness is one goal among many. The relative importances
of the different goals are allowed to shift dynamically, but in practice
after a while they will usually settle into a roughly constant "importance
balance."

["Importance" being a technical Novamente term, a quantity associated with
Nodes and Relationships in the system. It
determines roughly how much CPU time a Node or Relationship gets, and is
governed by a special nonlinear-dynamical equation called the Importance
Updating Function, which depends on many factors that I'm not going to
describe here.]
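To give a cartoon of the flavor of this (just a toy illustration I'm making
up for this email, *not* the real Importance Updating Function or actual
Novamente code), imagine something like:

  // Cartoon sketch only, not the real Novamente code or equation.
  // Each Node/Relationship carries an importance value that decays over
  // time, is boosted by stimulation, and determines its share of CPU time.
  class Atom {                          // stand-in for Node / Relationship
      double importance = 0.0;
  }

  class ImportanceUpdater {
      static final double DECAY = 0.95; // made-up parameter
      static final double GAIN  = 0.10; // made-up parameter

      // One step of a toy "importance updating function": nonlinear
      // squashing of decayed importance plus fresh stimulation.
      void update(Atom a, double stimulus) {
          a.importance = Math.tanh(DECAY * a.importance + GAIN * stimulus);
      }

      // CPU attention handed out roughly in proportion to importance.
      double cpuShare(Atom a, double totalImportance) {
          return totalImportance > 0 ? a.importance / totalImportance : 0.0;
      }
  }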

If you want the system to settle into a configuration in which Friendliness
is a highly important goal, you've got to interact with it in an appropriate
way.

I believe that the balancing of goals must dynamically emerge from the
overall balancing of structures and processes in the system. I don't
believe in "hard-wiring in" Friendliness as a supergoal, in the way that you
propose. This would not work well in the context of the Novamente
architecture, and my intuition says that it contradicts the intrinsically
self-organizing and fluid nature of mind in general.

The method is: Give the system the smarts to recognize when it's being
Friendly. Interact with it in a way that teaches it that Friendliness is
important. Then GoalNodes corresponding to Friendliness will
"spontaneously" become important.

> Speaking of which, as a human, I'd like to ask that you write up
> Novamente's proposed Friendliness architecture and publish it to the Web.

In time, my friend, in time....

The Novamente "goal architecture" can't be clearly explained in any detail,
without also explaining most the rest of the architecture; that's the way
the system is... the parts all interpenetrate and interdepend.
[I.e., GoalNode extends CompoundRelationNode extends SchemaNode ... it's a
looong story. The dynamics of GoalNodes are general Novamente cognition
dynamics with particular parameter settings. ]
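[In Java-ish skeleton form, just to show the shape of the hierarchy (class
bodies omitted, since filling them in properly would mean explaining half
the system):

  class SchemaNode { }
  class CompoundRelationNode extends SchemaNode { }
  class GoalNode extends CompoundRelationNode { }
]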

We tentatively plan to publish a book describing the Novamente architecture,
but it won't be this year. We want the book to be *just right*, and so
we're going to circulate the manuscript to close associates for detailed
criticism and feedback first (later this year), before seriously considering
releasing it to the general public.

Incidentally, GoalNode is not even implemented yet in Novamente (though we
did some mucking-around with a slightly different kind of GoalNode in
Webmind, in 2000). Our current focus is on getting the basic cognition
methods to work properly together, which does not require the system to set
its own goals explicitly using GoalNodes, though it does require other
things that may be construed in a sense as dynamic goal-setting.

I know you have explicitly articulated your idea for a goal architecture for
an AI system. However, you have not articulated, so far as I know, a set of
cognitive dynamics that go along with this goal architecture. Thus it
remains undemonstrated that your goal architecture is actually compatible
with highly intelligent cognition. I am somewhat skeptical on this point.

-- Ben G


