Re: Seed AI milestones (was: Microsoft aflare)

From: Christian Szegedy (szegedy@or.uni-bonn.de)
Date: Wed Feb 27 2002 - 09:38:16 MST


Ben Goertzel wrote:

>A) paths that begin with unintelligent self-modification
>B) paths that begin with purposeful intelligent non-self-modifying behavior
>C) paths that begin with a mixture of self-modification and purposeful
>intelligent behavior
>
>Eli and I, at this point, seem to share the intuition that B is the right
>approach. I have been clear on this for a while, but Eli's recent e-mail
>is the first time I've heard him clearly agree with me on this.
>
>Eugene, if your intuition is A, that's fine. In this case something like
>Tierra (which demonstrates robust self-modification, not leading to
>instant death) may be viewed as a step toward seed AI. However, the case
>of Tierra is a mild counterargument to the A route, because its robust
>self-modification seems to be inadequately generative -- i.e. like all
>other Alife systems so far, it yields a certain amount of complexity and
>then develops no further.
>
Let me just ask: what do you mean by self-modification in the first place?

An implementation has many levels. If you have a single executable which
operates on some data, then the source code is constant but the data
changes. If you change or recompile the executable, then the operating
system is constant. If you recompile the operating system, then the
instruction set of the processor remains constant. If you have an FPGA
and restructure it, then the basic structure of the FPGA is constant. If
you re-engineer your complete hardware, then the laws of physics stay
constant. Of course, finer distinctions are also possible.
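
To make the lowest level concrete, here is a toy sketch (my own
illustration in Python, not anything from the thread): a fixed
interpreter whose "program" is ordinary data, plus an instruction that
rewrites that data. At the executable level nothing changes; at the data
level the system is self-modifying.

    # A fixed "executable" (this interpreter) running a program that is
    # mutable data. Whether this counts as self-modification depends on
    # which level you hold constant.
    def run(program):
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "print":
                print(args[0])
            elif op == "rewrite":
                # The program overwrites one of its own instructions.
                program[args[0]] = args[1]
            pc += 1

    program = [
        ("rewrite", 1, ("print", "modified")),  # patch the next instruction
        ("print", "original"),                  # never runs in original form
    ]
    run(program)  # prints "modified": the data changed, the interpreter didn't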

I guess you argued that the machine-code representation of the
executable won't change in the first phase of the development of an AI.
I agree, but I still think that an intelligent entity must have a high
degree of flexibility. So the question is not really whether
self-modification is needed, but what type of self-modification is
needed. I also agree with Eugene that a somewhat intelligent AI must
possess a high degree of freedom (space for improvement, ability to
learn, whatever you call it) and robustness at the same time. Balancing
these two will probably be an essential aspect of constructing an AI,
and one of the most sensitive tasks of all. (But I don't claim that it
is the only task.)

And this is, most probably, not a static issue: you can't say that some
given balance is optimal for all AIs. It is imaginable that in the first
phase you will have to "add" more and more flexibility to your AI as ve
gets increasingly intelligent. Perhaps there will be a long series of
steps that increase ver freedom without allowing ver to modify ver
machine-code executable, while the last steps -- from a constant
executable to complete self-re-engineering within the laws of physics --
will take much less time.
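
A toy sketch of this staged widening of freedoms (the level names and
the competence test are hypothetical, just to fix ideas):

    # Ordered from safest to riskiest level of self-modification.
    LEVELS = ["parameters", "learned_rules", "source_code",
              "executable", "hardware"]

    class FreedomGate:
        def __init__(self):
            self.granted = {"parameters"}  # start with the narrowest freedom

        def may_modify(self, level):
            return level in self.granted

        def grant_next(self, passed_competence_test):
            # Widen scope only after the AI has shown it can survive
            # (i.e. use safely) the freedoms it already has.
            if passed_competence_test:
                for level in LEVELS:
                    if level not in self.granted:
                        self.granted.add(level)
                        return level
            return None

    gate = FreedomGate()
    print(gate.may_modify("executable"))          # False at first
    gate.grant_next(passed_competence_test=True)  # now "learned_rules" too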

It seems similar to the development of a child: at first he has no
freedoms at all, because he would harm himself, but you can give him
more and more freedoms as he learns to survive them.

Of course, this is merely philosophy. I just wanted to point out that we
don't have possibilities A, B and C, but rather possibilities a0.0, ...,
a0.2332, ..., a1.0 (and probably on a much higher-dimensional scale).
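
Put differently (my own notation, not anything from Ben's message): if
each implementation level gets its own degree of allowed
self-modification in [0, 1], then an approach is a point in [0,1]^n
rather than one of three discrete labels:

    # One hypothetical point in the space of approaches:
    approach = {"data": 1.0,        # freely self-modifying data
                "executable": 0.0,  # machine code frozen, as in route B
                "hardware": 0.0}
    # A, B and C are then just regions of this space, and "a0.2332"
    # is some intermediate point along one axis.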

Best regards, Christian


