Re: New Singularity-relevant book

From: Bill Hibbard (test@doll.ssec.wisc.edu)
Date: Wed Oct 23 2002 - 04:39:18 MDT


(Ben, I am not subscribed to sl4@sl4.org, so if my message doesn't
get posted there will you please forward it? Thanks, Bill)

On Tue, 22 Oct 2002, James Rogers wrote:

> On 10/22/02 7:52 PM, "ben@goertzel.org" <ben@goertzel.org> wrote:
> > 1. Intelligent behavior cannot be programmed. Rather, it must
> > be learned and will be achieved by machines that mimic the
> > reinforcement learning of human brains.
>
>
> This is a nonsensical assertion on a number of levels, and I fear that it
> effectively pollutes those things derived from the assumption that this
> makes any kind of sense.

I agree that it is nonsense to say that intelligence is
learned, but that's not what I said. I said intelligent
behaviors are learned.

> First, it seems to confuse intelligence with knowledge. You don't "learn"
> intelligence, rather the concept of learning is premised on a machine
> (biological or otherwise) being intelligent to begin with. Intelligence has
> > to be an intrinsic property of the system or it's a non-starter. The
> bootstrap portion is getting a machine, by design or accident, that is
> tractably intelligent to begin with. Second, you CAN program intelligent
> > behavior; it's so obvious I'm not even sure where that came from. Granted,
> for extremely complex learning tasks it becomes less wise to let monkeys
> program machines for intelligent behavior if you expect to maintain some
> average quality of results, but it is certainly doable. Third, any designs
> to "mimic the reinforcement learning of human brains" seem misguided,
> largely because ANY system that can learn has these properties (ignoring the
> edge case of parrots); there is nothing categorically special about human
> > brains in this regard and I don't see where it buys anything, at least not as
> a checklist item.

As to your second point, a programmed rather than learned
implementation of intelligent behavior is only slightly
less absurd than Searle's Chinese Room. Perhaps I should
not have used the absolute word "cannot", but in any
practical sense what I said is true.

Third, I used the phrase "mimic the reinforcement learning
of human brains" just to make the point that intelligent
machines have more in common with human brains than with
current machines.

> > 2. Intelligent machines must have emotions that define their
> > positive and negative reinforcement values. It will be suicidal
> > to design machines that mimic human emotions. Rather their
> > behaviors should be positively reinforced by human happiness
> > and negatively reinforced by human unhappiness.
>
>
> Emotions are internal biasing mechanisms (default goal generators), but the
> importance of them is that they effectively generate or modify goals with
> little or no external input. I agree that trying to put animal-style
> biasing systems into a machine in the sense that they are in animals is
> pretty stupid. Using external biasing (such as human happiness) is a much
> smarter way to play with machines that can learn, if for no other reason
> than the experimental results will be less chaotic.
>
> I came to the conclusion years ago that emotions exist in animals primarily
> to bootstrap the learning process in newborns. In essence, emotions provide
> the initial goal systems that compel an animal to interact with its
> environment, behavior from which more complex behaviors can emerge. Absent
> a goal system, intelligence has zero survival value in an animal, so
> emotions are a reasonable evolutionary mechanism to bootstrap useful goals
> onto intelligent machinery (in an evolutionary sense).

I am following Francis Crick and Gerald Edelman in my use of
the word "emotion". They both say that emotions are essential
for intelligence based on the role of emotions for reinforcing
or selecting intelligent behaviors. Of course, "emotion" is an
overloaded term and you can find different neuroscientists who
use it in different ways.
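
To make concrete what I mean by reinforcement values defined by
human happiness rather than by emotions that mimic ours, here is
a minimal sketch (just an illustration, not code from the book);
the happiness_feedback function and the two actions are invented
for the example:

  import random

  # Invented stand-in for a human-supplied happiness rating in [-1, 1]:
  # the reinforcement value comes from outside the machine, not from
  # an internal emotion that mimics ours.
  def happiness_feedback(action):
      if action == "help":
          return random.gauss(0.8, 0.2)
      return random.gauss(-0.5, 0.2)

  actions = ["help", "ignore"]
  value = {a: 0.0 for a in actions}   # learned reinforcement values
  alpha = 0.1                         # learning rate

  for step in range(1000):
      if random.random() < 0.1:       # occasional exploration
          a = random.choice(actions)
      else:                           # otherwise act on learned values
          a = max(value, key=value.get)
      r = happiness_feedback(a)       # external reward signal
      value[a] += alpha * (r - value[a])

  print(value)   # "help" ends up with the higher learned value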

> > 3. Metcalfe's Law, that the value of a network increases as the
> > square of the number of people connected, will drive the
> > development of super-intelligent machines that know billions
> > of people well. This is in contrast with the human limit of
> > knowing only about 200 people well. This will give them a
> > higher level of consciousness than humans.
>
>
> I'm not sure this follows. Due to practical resource limitations, knowing
> billions of humans may have an averaging effect where most people exist as a
> fuzzy delta from a "typical human". Or at least this is the result I would
> expect from theory.
>
> While a very large machine may have a higher level of consciousness than a
> human, it has nothing to do with the number of people the machine will come
> in contact with. Consciousness by any meaningful metric is a function of
> machine limits, not the number of people you meet. Otherwise, humans that
> live in urban areas would be vastly more conscious than humans that live in
> rural areas, yet I see little evidence that this is true, nor does it really
> make sense from a theoretical standpoint.

The point isn't how many people we bump into on the street, or
even our number of acquaintances, but how many we can know well.
As I said "the human limit of knowing only about 200 people well",
which is discussed in the excellent book "Biology of Mind" by
Deric Bownds, available at http://dericbownds.net/.

The difference between human and animal consciousness can be
described in terms of whether animal minds include models of
other animals' minds, of events tomorrow, etc. Similarly, I think
a key difference between human and machine consciousness will
be the machines' detailed model of billions of human minds, in
contrast to our detailed model of about 200 human minds.

Because of the physical limits of human brains, our models of
billions of human minds are averaged out. But machine brains
that exceed our physical limits will have detailed models
of billions of human minds.
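
Just to put rough numbers on that scale difference, reading
Metcalfe's Law naively as value growing with the square of the
number of people known well (the figures below are only
illustrative):

  # A naive reading of Metcalfe's Law: network value grows as n squared.
  # The numbers are only illustrative, not claims about actual value.
  def metcalfe_value(n):
      return n * n

  human_limit = 200       # roughly how many people a human can know well
  machine_reach = 10**9   # billions of people known well by a machine

  print(metcalfe_value(machine_reach) / metcalfe_value(human_limit))
  # about 2.5e13 -- thirteen orders of magnitude on this crude measure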

> As a nitpick, Metcalfe's Law isn't really being used in a correct context
> here. (As an even more esoteric nitpick, Metcalfe's Law isn't really correct
> anyway and you have to add some additional dimensions for an analog of it to
> even make sense in the general case, as a number of others with more time
> than I to spend on such things have pointed out.)
>
>
> > 4. In my opinion, the essential property of consciousness in
> > humans and animals is that it enables brains to process
> > experiences that are not actually occurring. This consciousness
> > "simulator" evolved as a way to solve the temporal credit
> > assignment problem in reinforcement learning, which is the
> > problem of reinforcing behaviors when rewards happen much
> > later than the behaviors, and when multiple behaviors precede
> > rewards. Of course consciousness is not a simple linear
> > simulator like a weather model, but consists of multiple agile
> > and interacting threads. The consciousness of super-intelligent
> > machines will simulate all of humanity and their interactions
> > via billions of agile and interacting threads. This higher
> > level consciousness will have detailed social knowledge that
> > human social scientists can only estimate with statistics.
>
>
> I don't really agree with much of this on some fundamental grounds, but I
> don't feel like spending too much time on this tonight either. Again, it
> seems to be based on a confused model of what intelligence is in the context
> of any kind of computing machinery. Lots of suitcase terms are used that
> make rigorous discussion almost impossible.

The statement "the essential property of consciousness in humans
and animals is that it enables brains to process experiences that
are not actually occurring" says something pretty rigorous. The
simplest animal brains can only process events as they happen.
But at some level of evolution, brains break free of "now".

And the temporal credit assignment problem is a well-known
and rigorous problem. There has been some very exciting
neuroscience research into how brains solve it, at least when
delays between behaviors and rewards are short and predictable,
reported in the paper:

  Brown, J., Bullock, D., and Grossberg, S. How the Basal Ganglia
  Use Parallel Excitatory and Inhibitory Learning Pathways to
  Selectively Respond to Unexpected Rewarding Cues. Journal of
  Neuroscience 19(23), 10502-10511. 1999.

This is available on-line at:

  http://cns-web.bu.edu/pub/diana/BroBulGro99.pdf
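
Temporal-difference learning is one standard computational
account of this kind of short-delay credit assignment: a reward
prediction error updates the values of the cues that preceded
the reward. Here is a minimal sketch (mine, not taken from the
paper):

  # Minimal TD(0) sketch: a cue is followed, two steps later, by a reward.
  # After training, the reward prediction has moved back to the cue, which
  # is the essence of short-delay temporal credit assignment.
  alpha, gamma = 0.1, 0.9
  states = ["cue", "delay", "reward", "end"]
  value = {s: 0.0 for s in states}
  reward_on_entry = {"cue": 0.0, "delay": 0.0, "reward": 1.0, "end": 0.0}

  for trial in range(200):
      for s, s_next in zip(states, states[1:]):
          # prediction error, analogous to the phasic dopamine signal
          delta = reward_on_entry[s_next] + gamma * value[s_next] - value[s]
          value[s] += alpha * delta

  print(value)   # value["cue"] ends up well above zero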

I think that the need to solve the temporal credit assignment
problem when delays between behaviors and rewards are not
short and predictable was the selective force behind the
evolution of consciousness. Any known effective solution to
this problem requires a simulation model of the world.
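
To make that concrete, here is a rough Dyna-style sketch (my own
construction, and only one way to cash out the idea) in which the
learner replays experience through a learned model of the world,
so that a reward arriving many steps after the decisive behavior
still gets credited back to it:

  import random

  # Rough Dyna-style sketch: a corridor of states 0..5, with the only
  # reward at the far end.  The agent keeps a learned model of the world
  # and replays simulated transitions from it, so the delayed reward gets
  # credited back to the very first behavior after only a few real episodes.
  N = 6
  actions = ["stay", "go"]
  Q = {(s, a): 0.0 for s in range(N) for a in actions}
  model = {}                      # (state, action) -> (next_state, reward)
  alpha, gamma, planning_steps = 0.5, 0.9, 50

  def step(s, a):
      s_next = min(s + 1, N - 1) if a == "go" else s
      r = 1.0 if s_next == N - 1 and s != N - 1 else 0.0
      return s_next, r

  for episode in range(20):
      s = 0
      while s != N - 1:
          a = random.choice(actions)              # explore
          s_next, r = step(s, a)                  # real experience
          best = max(Q[(s_next, b)] for b in actions)
          Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
          model[(s, a)] = (s_next, r)             # learn the world model
          for _ in range(planning_steps):         # replay simulated experience
              ps, pa = random.choice(list(model))
              pn, pr = model[(ps, pa)]
              pbest = max(Q[(pn, b)] for b in actions)
              Q[(ps, pa)] += alpha * (pr + gamma * pbest - Q[(ps, pa)])
          s = s_next

  print(Q[(0, "go")], Q[(0, "stay")])   # "go" at the start wins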

> Or at least these are my first analytical impressions based on the four
> ideas provided. I'm sure someone who has actually read it might come away
> with a different impression than I got from the above. :-)
>
> (CC-ed to the author in case he is interested in feedback.)

Thanks for your comments.

Cheers,
Bill
----------------------------------------------------------
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
test@doll.ssec.wisc.edu 608-263-4427 fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html


