From: James Rogers (jamesr@best.com)
Date: Wed Oct 23 2002 - 00:26:26 MDT
On 10/22/02 7:52 PM, "ben@goertzel.org" <ben@goertzel.org> wrote:
> 1. Intelligent behavior cannot be programmed. Rather, it must
> be learned and will be achieved by machines that mimic the
> reinforcement learning of human brains.
This is a nonsensical assertion on a number of levels, and I fear that it
effectively pollutes everything derived from it.
First, it seems to confuse intelligence with knowledge. You don't "learn"
intelligence, rather the concept of learning is premised on a machine
(biological or otherwise) being intelligent to begin with. Intelligence has
to be an intrinsic property of the system or it's a non-starter. The
bootstrap portion is getting a machine, by design or accident, that is
tractably intelligent to begin with. Second, you CAN program intelligent
behavior; it's so obvious I'm not even sure where that came from. Granted,
for extremely complex learning tasks it becomes less wise to let monkeys
program machines for intelligent behavior if you expect to maintain some
average quality of results, but it is certainly doable. Third, any designs
to "mimic the reinforcement learning of human brains" seem misguided,
largely because ANY system that can learn has these properties (ignoring the
edge case of parrots); there is nothing categorically special about human
brains in this regard, and I don't see where it buys anything, at least not as
a checklist item.
> 2. Intelligent machines must have emotions that define their
> positive and negative reinforcement values. It will be suicidal
> to design machines that mimic human emotions. Rather their
> behaviors should be positively reinforced by human happiness
> and negatively reinforced by human unhappiness.
Emotions are internal biasing mechanisms (default goal generators); their
importance is that they effectively generate or modify goals with
little or no external input. I agree that trying to put animal-style
biasing systems into a machine in the sense that they are in animals is
pretty stupid. Using external biasing (such as human happiness) is a much
smarter way to play with machines that can learn, if for no other reason
than the experimental results will be less chaotic.
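
To make the external-biasing idea concrete, here is a minimal sketch in
Python (not taken from anything the author wrote; the toy actions, the
external_reward() stand-in for a happiness signal, and all parameters are
my own invention). The only point is that the agent's value estimates are
shaped entirely by a reward supplied from outside it:

# Purely illustrative: a tabular learner whose reward is an external
# signal (stand-in for "human happiness") rather than an internal drive.
import random
from collections import defaultdict

ACTIONS = ["help", "ignore", "annoy"]

def external_reward(action):
    """Stand-in for an externally supplied happiness/unhappiness signal."""
    return {"help": +1.0, "ignore": 0.0, "annoy": -1.0}[action]

def train(episodes=1000, alpha=0.1, epsilon=0.1):
    q = defaultdict(float)       # action-value estimates, single state for simplicity
    for _ in range(episodes):
        if random.random() < epsilon:
            a = random.choice(ACTIONS)               # explore
        else:
            a = max(ACTIONS, key=lambda x: q[x])     # exploit current estimate
        r = external_reward(a)                       # bias comes from outside the agent
        q[a] += alpha * (r - q[a])                   # incremental value update
    return dict(q)

if __name__ == "__main__":
    print(train())   # "help" ends up with the highest value because the external signal says so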
I came to the conclusion years ago that emotions exist in animals primarily
to bootstrap the learning process in newborns. In essence, emotions provide
the initial goal systems that compel an animal to interact with its
environment, behavior from which more complex behaviors can emerge. Absent
a goal system, intelligence has zero survival value in an animal, so
emotions are a reasonable evolutionary mechanism to bootstrap useful goals
onto intelligent machinery.
> 3. Metcalfe's Law, that the value of a network increases as the
> square of the number of people connected, will drive the
> development of super-intelligent machines that know billions
> of people well. This is in contrast with the human limit of
> knowing only about 200 people well. This will give them a
> higher level of consciousness than humans.
I'm not sure this follows. Due to practical resource limitations, knowing
billions of humans may have an averaging effect where most people exist as a
fuzzy delta from a "typical human". Or at least this is the result I would
expect from theory.
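
A quick back-of-the-envelope sketch of that averaging argument, in
Python, with completely made-up numbers for the memory budget and the
"know someone well" threshold (the point is only the 1/n scaling, not the
specific figures):

# The memory budget and threshold below are hypothetical; they only show
# how per-person detail shrinks as the population grows under a fixed
# resource budget.

def per_person_detail(total_memory_units, people):
    """Detail available per person under a fixed resource budget."""
    return total_memory_units / people

BUDGET = 1e12          # hypothetical total units of person-specific memory
KNOW_WELL = 1e6        # hypothetical units needed to model one person in depth

for n in (200, 1_000_000, 6_000_000_000):
    d = per_person_detail(BUDGET, n)
    print(f"{n:>13,} people -> {d:,.1f} units each "
          f"({'detailed model' if d >= KNOW_WELL else 'fuzzy delta from a typical human'})")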
While a very large machine may have a higher level of consciousness than a
human, it has nothing to do with the number of people the machine will come
in contact with. Consciousness by any meaningful metric is a function of
machine limits, not the number of people you meet. Otherwise, humans that
live in urban areas would be vastly more conscious than humans that live in
rural areas, yet I see little evidence that this is true, nor does it really
make sense from a theoretical standpoint.
As a nitpick, Metcalfe's Law isn't really being used in a correct context
here. (As an even more esoteric nitpick, Metcalfe's Law isn't really correct
anyway and you have to add some additional dimensions for an analog of it to
even make sense in the general case, as a number of others with more time
than I to spend on such things have pointed out.)
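
For concreteness, the usual counting behind Metcalfe's Law is just the
number of possible pairwise connections, n*(n-1)/2, which is where the
~n^2 "value" figure comes from; the equal-value-per-link assumption is
the part those critics object to. A trivial sketch:

# Metcalfe-style counting: n participants give n*(n-1)/2 possible pairwise
# connections, so "value" is taken to grow roughly as n squared.

def pairwise_connections(n):
    return n * (n - 1) // 2

for n in (10, 200, 1_000_000):
    print(n, pairwise_connections(n))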
> 4. In my opinion, the essential property of consciousness in
> humans and animals is that it enables brains to process
> experiences that are not actually occurring. This consciousness
> "simulator" evolved as a way to solve the temporal credit
> assignment problem in reinforcement learning, which is the
> problem of reinforcing behaviors when rewards happen much
> later than the behaviors, and when multiple behaviors precede
> rewards. Of course consciousness is not a simple linear
> simulator like a weather model, but consists of multiple agile
> and interacting threads. The consciousness of super-intelligent
> machines will simulate all of humanity and their interactions
> via billions of agile and interacting threads. This higher
> level consciousness will have detailed social knowledge that
> human social scientists can only estimate with statistics.
I don't really agree with much of this on some fundamental grounds, but I
don't feel like spending too much time on this tonight either. Again, it
seems to be based on a confused model of what intelligence is in the context
of any kind of computing machinery. Lots of suitcase terms are used that
make rigorous discussion almost impossible.
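
For anyone who hasn't run into the term, the "temporal credit assignment
problem" the quoted text leans on is the standard reinforcement learning
one: a reward that arrives long after the behaviors that earned it still
has to be credited back to those behaviors. A minimal TD(lambda)-style
sketch on a toy chain task (the task, names, and parameters are my own
illustration, not anything from the author):

# Minimal illustration of temporal credit assignment: a reward that only
# arrives at the end of a chain of states still gets propagated back to
# the earlier states via eligibility traces (TD(lambda)).

N_STATES = 5            # states 0..4, reward only on leaving state 4
ALPHA, GAMMA, LAMBDA = 0.1, 0.95, 0.8

def run_episodes(episodes=500):
    values = [0.0] * N_STATES
    for _ in range(episodes):
        traces = [0.0] * N_STATES
        for s in range(N_STATES):
            terminal = (s == N_STATES - 1)
            reward = 1.0 if terminal else 0.0
            next_value = 0.0 if terminal else values[s + 1]
            delta = reward + GAMMA * next_value - values[s]   # TD error
            traces[s] += 1.0                                  # mark s as eligible
            for i in range(N_STATES):
                values[i] += ALPHA * delta * traces[i]        # credit flows to earlier states
                traces[i] *= GAMMA * LAMBDA                   # traces decay over time
    return values

if __name__ == "__main__":
    print(run_episodes())   # earlier states end up with discounted credit for the final reward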
Or at least these are my first analytical impressions based on the four
ideas provided. I'm sure someone who has actually read it might come away
with a different impression than I got from the above. :-)
(CC-ed to the author in case he is interested in feedback.)
Cheers,
-James Rogers
jamesr@best.com