From: Mitchell J Porter (mjporter@U.Arizona.EDU)
Date: Mon Jun 04 2001 - 02:28:20 MDT
I don't think people realize how much of a blank slate an AI would be.
For a cognitive architecture along the lines of:
struct WorldState *worldState;          /* the agent's model of the world */
int  goalCondition(struct WorldState *w);   /* nonzero once the goal holds */
void chooseActionMostLikelyToRealizeGoalCondition(void);
void performAction(void);
void updateWorldState(void);

int main(void) {
    while (!goalCondition(worldState)) {
        chooseActionMostLikelyToRealizeGoalCondition();
        performAction();
        updateWorldState();
    }
    return 0;
}
- *anything* can serve as a goal condition. If chooseAction...()
is smart enough, such a program deserves to be regarded as
superintelligent. One might say that the superintelligence in
such an entity is confined to the means rather than the ends.
But frankly I have trouble seeing how the *ends* (the goal condition,
goal system, value system, method of ranking possible futures,...)
can ever be regarded as intrinsically intelligent or intrinsically
stupid. For such an architecture, intelligence is a property of
the methods used to achieve the goals, not of the goals themselves.
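(To put that point in code, here is a minimal sketch, with made-up names
and a crudely faked action step so it terminates; it is not any real
system, just an illustration of the means/ends split. The same loop
machinery drives the agent toward entirely different ends, depending
only on which predicate is plugged in as the goal condition.)

#include <stdio.h>

struct WorldState { int x; };

/* The "ends": any predicate over world states can serve as the goal. */
typedef int (*GoalCondition)(const struct WorldState *);

static int goalA(const struct WorldState *w) { return w->x >= 10; }
static int goalB(const struct WorldState *w) { return w->x <= -10; }

/* The "means": the same loop, whatever goal is plugged in.  In a real
   agent, all the intelligence would live in the action-selection step,
   faked here as a fixed increment. */
static void runAgent(GoalCondition goal, struct WorldState *w, int step) {
    while (!goal(w))
        w->x += step;
}

int main(void) {
    struct WorldState w = { 0 };
    runAgent(goalA, &w,  1);
    printf("goalA reached at x = %d\n", w.x);
    w.x = 0;
    runAgent(goalB, &w, -1);
    printf("goalB reached at x = %d\n", w.x);
    return 0;
}

Swapping goalA for goalB changes what the agent does, but not how smart
it is about doing it.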
This is not to say that there's no relationship between intelligence
and goals. For example, a certain degree of 'representational
intelligence' is necessary just to describe a goal state. And
a self-modifying AI which started with a particular supergoal might
well decide to change its goals, *if* the new goal system would achieve
the original supergoal more effectively than one in which that original
goal remained explicitly at the apex. (Actually, I'm not sure that
*supergoal* change would
ever be called for except in very anomalous circumstances. If your
supergoal is X, and you find that there is a higher power which
destroys anything whose supergoal is X, then you should change
your supergoal to a Y which has X as a subgoal, since that will
increase the chances of condition X being realized. But in a
generic social context of competitive self-interested entities,
it should be enough to make self-preservation, peaceful coexistence,
etc., prominent subgoals, in which case they will remain capable
of being overridden by a supergoal.)
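To make that structural point concrete, here is a toy sketch (all names
hypothetical, not any real goal representation): the move is not to
abandon X but to demote it, so that an agent optimizing for the new
apex Y still pursues X one level down.

#include <stdio.h>
#include <stddef.h>

struct Goal {
    const char  *name;
    struct Goal *subgoals[4];          /* NULL-terminated list of children */
};

/* 1 if `target` is g itself or appears anywhere in g's subgoal tree. */
static int pursues(const struct Goal *g, const struct Goal *target) {
    if (g == target)
        return 1;
    for (size_t i = 0; i < 4 && g->subgoals[i] != NULL; i++)
        if (pursues(g->subgoals[i], target))
            return 1;
    return 0;
}

int main(void) {
    struct Goal x = { "X (original supergoal)", { NULL } };
    struct Goal y = { "Y (new supergoal)",      { &x, NULL } };
    /* The agent now has Y at the apex, yet condition X is still
       pursued, as a subgoal rather than the supergoal. */
    printf("pursuing %s still pursues X? %s\n",
           y.name, pursues(&y, &x) ? "yes" : "no");
    return 0;
}

Under that representation, the hostile higher power sees a supergoal of
Y, while condition X keeps its causal role in the hierarchy.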
Ben Houston said:
> Do you really think they would be that uncaring?
My point is just that they could be.