Re: Not the only way to build an AI [WAS: Please Re-read CAFAI]

From: Richard Loosemore (
Date: Wed Dec 14 2005 - 08:53:01 MST

Eric Rauch wrote:
> Given your skepticism of goal-guided AGI, what do you think of the
> goalless AGI I suggested?

An AGI cannot be completely goalless, because the more you try to
exclude *any* kind of goal mechanism, the more you make it into a
conventional program that, because of its simplicity, is simply not
going to have the flexibility to be intelligent.

The tricky part is that "goal system" has many interpretations, and I am
not denying the need for some kind of goal system, because that is the
mechanism that causes the system to organize its model-building (the
activity that senses the world, builds models of what is going on out
there, and uses those models to take actions .... all of thinking, in
other words). Without a goal mechanism of some sort, the AGI just sits
there having random thoughts about whatever takes its fancy, and that
kind of creature would never get smart in the first place. My specific
grouse is that people sometimes have a very narrow interpretation of what
that goal system is, and so delude themselves into thinking that it
works in a very deterministic way (and hence can be set up in such a way
as to "guarantee" friendliness). I think that kind of goal system
would not work. A motivation/goal system, in which general motivations
drive the overall behavior while a flexible goal system loosely governs
the moment-to-moment processes, is the way we will eventually have to
build an AGI.
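A toy sketch of that two-layer idea (all names here are illustrative, not from the original post): slow-changing motivations bias which transient goal the agent pursues at each moment, rather than a single fixed supergoal deterministically dictating behavior.

```python
import random

# Persistent, slow-changing drives (hypothetical example values).
MOTIVATIONS = {"curiosity": 0.7, "social": 0.3}

# Candidate moment-to-moment goals, with how well each one serves
# each motivation (again, purely illustrative numbers).
GOALS = {
    "explore_new_area": {"curiosity": 0.9, "social": 0.1},
    "talk_to_peer":     {"curiosity": 0.2, "social": 0.8},
}

def pick_goal(motivations, goals, rng=random.Random(0)):
    """Choose a goal probabilistically, weighted by motivational fit.

    The selection is stochastic, so motivations *loosely govern*
    behavior instead of determining it outright.
    """
    weights = {
        name: sum(motivations[m] * fit for m, fit in fits.items())
        for name, fits in goals.items()
    }
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names])[0]

print(pick_goal(MOTIVATIONS, GOALS))
```

The point of the sketch is only that the mapping from motivations to actions is weighted and probabilistic, which is why such a system resists the kind of deterministic guarantees criticized above.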

Answering your specific question: I can see how one could build
quasi-intelligent systems that did some knowledge acquisition (I would
call it a "drone") without having enough of a goal system to know what
it was doing (without being self-aware). Rather like an extreme version
of a human savant. But that is a slightly different issue... not quite
a missing goal system of the sort you were after, just an impoverished
goal system, and with other stuff missing. But I think such a thing
would have to be governed (managed, operated, controlled) by a real AGI.
What I have in mind here is the idea that a real AGI could create vast
numbers of drones that did specialized research on particular tasks
without having much awareness of the nature of self, so they would be
like the things we call "number crunchers", except the AGI would call
them "concept crunchers". Maybe such a system would satisfy some of the
requirements you had in mind when you talked of a goalless AGI, although
it would be no good if it had to have a real AGI to supervise it.
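The supervisor/drone arrangement can be caricatured as follows (names and the "concept crunching" task are illustrative stand-ins): a controlling process decides what is worth crunching and farms narrow tasks out to workers that apply a fixed procedure with no model of themselves or of the overall purpose.

```python
from concurrent.futures import ThreadPoolExecutor

def drone(task):
    """A drone: applies one narrow, fixed procedure to its input,
    with no awareness of why the task matters."""
    corpus, keyword = task
    return keyword, sum(doc.count(keyword) for doc in corpus)

def supervisor(corpus, keywords):
    """Stand-in for the governing AGI: chooses the tasks,
    dispatches the drones, and interprets their results."""
    tasks = [(corpus, k) for k in keywords]
    with ThreadPoolExecutor(max_workers=4) as pool:
        return dict(pool.map(drone, tasks))

corpus = ["goals guide thinking", "thinking builds models of goals"]
print(supervisor(corpus, ["goals", "models"]))
# {'goals': 2, 'models': 1}
```

All of the "knowing what it is doing" lives in the supervisor; the drones are deliberately impoverished, which is the sense in which they are number crunchers for concepts rather than goalless AGIs in their own right.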

> Also, I'm curious as to how the members of this list maintain such a
> high degree of certainty about the behavior of post-singularity
> intelligences, which are almost by definition supposed to be beyond
> our comprehension. (Richard, this is not directed at you.)

I will comment anyway ;-). I think you have put your finger on one of
the big, glaring inconsistencies in the discussions that take place
here. I believe the root of this is the same dogmatic attachment to the
"Neat" (mostly decision-theory-based) approach to AI. (cf the Neats vs
Scruffs war .... I count myself as a NeoScruff).

Richard Loosemore

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:54 MDT