Re: How do you know when to stop? (was Re: Why playing it safe is dangerous)

From: Ben Goertzel
Date: Sun Feb 26 2006 - 09:02:22 MST

> We have actually ALREADY reached the point where many computer programs
> self-modify. I can practically guarantee that they aren't friendly, as they
> don't even have any model at all of the world external to the computer. It's
> hard to be friendly if you don't know that anyone's there. (A chain of
> increasingly less relevant examples of self modifying code follows.)
> P.S.: Check out: Evolutionary algorithms, Genetic Programming, Self
> Optimizing Systems, and Self Organizing systems. You might also look into
> Core War and Tierra. These are all (currently) quite primitive, but they are
> actual self-modifying code.


I'm very familiar with these techniques, and in fact my Novamente AI
system uses a sort of evolutionary algorithm (a novel variant of the
Estimation of Distribution Algorithm approach) for hypothesis generation.
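For readers unfamiliar with Estimation of Distribution Algorithms, the generic univariate version (UMDA) can be sketched in a few lines. This is only the textbook algorithm on a toy bitstring problem, not the Novamente variant:

```python
import random

def umda_onemax(n_bits=20, pop_size=100, n_select=50, generations=30, seed=0):
    """Univariate Marginal Distribution Algorithm on the OneMax problem
    (maximize the number of 1-bits). Instead of mutating individuals, an
    EDA samples a population from a probability model, then re-estimates
    the model from the fittest individuals."""
    rng = random.Random(seed)
    probs = [0.5] * n_bits  # uninformative starting distribution
    best = None
    for _ in range(generations):
        # Sample a population from the current distribution.
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        # Select the fittest individuals.
        pop.sort(key=sum, reverse=True)
        elite = pop[:n_select]
        if best is None or sum(elite[0]) > sum(best):
            best = elite[0]
        # Re-estimate the per-bit marginal probabilities from the elite.
        probs = [sum(ind[i] for ind in elite) / n_select
                 for i in range(n_bits)]
    return best

best = umda_onemax()
```

Note that at no point does the algorithm's own code change; only the probability model and the sampled candidates evolve.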

Technically, most evolutionary algorithms do not involve
self-modifying code; rather, they are learning-based automated code
generation: the learning algorithm produces and evaluates new candidate
programs each generation, but the algorithm itself is never rewritten.

Tierra does involve self-modifying code of a very simple sort, but not
of a sort that could ever lead to mouse-level intelligence, let alone
human-level or superhuman intelligence.

Of course, not all self-modifying code has nontrivial hard-takeoff
potential; and an AI system need not possess self-modifying code in
order to have nontrivial hard-takeoff potential.

Sorry if I spoke too imprecisely in my prior e-mail,

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT