Re: Seed AI milestones (was: Microsoft aflare)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Feb 27 2002 - 10:29:38 MST


Ben Goertzel wrote:
>
> Yes, I agree that my breakdown into possibilities A, B and C is a crude
> categorization, and that there are many many sub-possibilities with
> significant qualitative differences between them.
>
> A mathematical formalization of the different varieties of self-modification
> would be possible, but I'm not going to undertake that project today. Maybe
> next month!
>
> There is, however, a qualitative difference between (here I go again with
> crude categorizations):
>
> 1) self-modification like that of the human brain, in which the low-level
> learning algorithms and knowledge representation mechanisms are fixed, but
> high-level learning algorithms and knowledge representation techniques are
> learned and modified
>
> 2) self-modification like that of an uber-brain in which new
> neurotransmitters could be inserted, modifications to the conductive
> properties of neurons could be made, and so forth.
>
> 3) self-modification like that of a brain that could rebuild itself at the
> molecular level, replacing neurons, synapses and glia with entirely
> different sorts of structures

(excerpt)

At the same time, one should also be careful in drawing
analogies between "strongly" self-improving processes such as
recursively self-enhancing minds, and "weakly" self-improving
processes such as the development of the hominid line or the
accumulation of human cultural knowledge. The latter processes,
both roughly exponential in character, are characterized by an
external improving factor operating on a store of synergetically
interacting content. Humans increase their cultural knowledge, but
this increase in cultural knowledge has not altered the hardware of
human intelligence, although it has made it easier to develop
further cultural knowledge. Similarly, the development of the
hominid line did not change the basic character of the evolutionary
process, although the accumulation of genetic complexity apparently
made it easier for further complexity to develop. Both cases of
"weak" self-improvement involved the operation of an external
improvement process, already powerful, which did not further
increase in power during the timespan considered. Thus it is not
safe to conclude that internally-driven development in a recursively
self-enhancing intelligence will have an exponential lower bound
during its early stages, nor that it will have an exponential upper
bound during its later stages.

(/excerpt)
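
To make the quantitative point concrete, here is a toy numerical sketch
(mine, not part of the excerpt; the 0.1 coefficients and the specific
growth laws are arbitrary assumptions chosen only to illustrate the shape
of the argument). In the "weak" case the improver's power is held fixed,
so the content grows exponentially; in the "strong" case the improver's
power is itself a function of accumulated content, so the trajectory need
not be bounded by an exponential in either direction:

    # Toy sketch only -- coefficients and growth laws are illustrative
    # assumptions, not a model anyone has proposed.

    def weak_self_improvement(steps=50, content=1.0, improver_power=0.1):
        """External improver of fixed power acting on accumulated content."""
        history = []
        for _ in range(steps):
            content += improver_power * content   # improver never gets stronger
            history.append(content)
        return history                            # plain exponential: (1.1)^n

    def strong_self_improvement(steps=50, content=1.0):
        """Improver whose power grows with the content it has produced."""
        history = []
        for _ in range(steps):
            improver_power = 0.1 * content        # the improver itself improves
            content += improver_power * content
            history.append(content)
        return history                            # eventually outruns any exponential

Start the strong process with less initial content than the weak one and
it lags at first, then overtakes later, which is the sense in which
neither an exponential lower bound early on nor an exponential upper
bound later on follows from the analogy.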

(1) and (2/3) seem to correspond to the distinction between weakly
self-improving processes - i.e., learning processes - and strongly
self-improving processes. The difference between (2) and (3) seems to be
the difference between consciously modifying cognitive content, and
consciously modifying source code.
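
A similarly crude sketch of that last distinction (again mine, purely
illustrative; the class and method names are made up): a system that
modifies cognitive content adjusts the data its fixed update rule
operates on, while a system that modifies source code can rewrite the
update rule itself:

    # Illustrative toy only.

    class ContentModifier:
        """Learns by changing stored content; the learning rule is fixed."""
        def __init__(self):
            self.weight = 0.0                     # cognitive content
        def learn(self, error):
            self.weight -= 0.1 * error            # the rule itself never changes

    class SourceModifier:
        """Learns the same way, but can also rewrite its own learning rule."""
        def __init__(self):
            self.weight = 0.0
            self.rule_source = "lambda w, e: w - 0.1 * e"
        def learn(self, error):
            self.weight = eval(self.rule_source)(self.weight, error)
        def rewrite_rule(self, new_source):
            self.rule_source = new_source         # e.g. "lambda w, e: w - 0.5 * e"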

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


