From: Charles D Hixson (charleshixsn@earthlink.net)
Date: Sun Feb 26 2006 - 20:33:11 MST
On Sunday 26 February 2006 08:02 am, Ben Goertzel wrote:
> > We have actually ALREADY reached the point where many computer programs
> > self-modify. I can practically guarantee that they aren't friendly, as
> > they don't even have any model at all of the world external to the
> > computer. It's hard to be friendly if you don't know that anyone's
> > there. (A chain of increasingly less relevant examples of self modifying
> > code follows.)
> >
> > P.S.: Check out: Evolutionary algorithms, Genetic Programming, Self
> > Optimizing Systems, and Self Organizing systems. You might also look
> > into Core War and Tierra. These are all (currently) quite primitive, but
> > they are actual self-modifying code.
>
> Hi,
>
> I'm very familiar with these techniques and in fact my Novamente AI
> system uses a sort of evolutionary algorithm (a novel variant of
> Estimation of Distribution algorithm) for hypothesis generation.
>
> Technically, most evolutionary algorithms do not involve
> self-modifying code, rather they are learning-based automated code
> generation.
>
> Tierra does involve self-modifying code of a very simple sort, but not
> of a sort that could ever lead to mouse-level intelligence let alone
> human-level or superhuman.
>
> Of course, not all self-modifying code has nontrivial hard-takeoff
> potential; and an AI system need not possess self-modifying code in
> order to have nontrivial hard-takeoff potential.
>
> Sorry if I spoke too imprecisely in my prior e-mail,
> Ben
I'm not sure that you spoke too imprecisely when speaking for yourself... but
you aren't alone.
Perhaps I'm projecting from my own experiences. I tend to consider that a
system which generates code automatically is modifying its code if it then
runs that code. Perhaps you mean that you read over the code between each set
of runs. I did something like that in land-use modeling during the 1980's.
[The approach was retired when we switched to PCs that didn't have enough
horsepower to run the models... and though now they do, the organization has
been restructured.] In this case all that was done was adjusting the weights
of "trip costs" between zones, but the computer did it automatically, and
even though it was technically feasible to examine the weights, only scattered
checks were ever made. Occasionally we would stack several runs in series to
do repeated projections further into the future, and I thought of that
repeated series of runs as a single program projecting multiple decades ahead.
(We weren't right, but we captured the general trends.) I'll grant (EASILY)
that this program wasn't intelligent in any meaningful sense, but it was
definitely "self-modifying" as I think of the term.
It is my expectation that when a company runs a project, the managers won't
understand the details, and the programmers won't understand the context.
This will frequently result in problems that could be avoided, but aren't.
When I assume that companies are building increasingly intelligent
programs, thinking that things work this way makes me nervous. Corners
regularly get cut where they won't show, as people skip parts of what they
are supposed to do because it's boring, or because they're late for a bus.
This leads me to expect that once mandated examination of the code becomes
routine, and corrections are very rarely needed, the examination will tend to
be skipped. Particularly if the errors are subtle, so that "skipped" isn't
even the right word; "done without sufficient attention" might be better.
Think of the scene in "The Andromeda Strain" where the researchers are
testing samples... and one slips through. Naturally, most such mistakes won't
cause any significant problem, so they'll be repeatedly detected and the
result will be "Well, no harm done". (I'm assuming that quality control
catches anything that's blatantly wrong. Every iteration passes the "stress
tests".) However...
I also assume that anything people do has an error rate; that there comes a
point when one decides that the machine's judgement about the quality of a
product is better than the human expert's; and that THAT judgement has an
error rate too.
To me, an engineer's "This appears to be a friendly AI" is better assurance
than a government's stamp of approval, but certainty won't be available. I'm
not sure that I trust a DoD stamp of approval. One of the things they want
is a robot soldier, and that's a bit hard to square with friendliness.
Similar considerations apply to the other branches of the government...and
they're doing, or at least funding, most of the work.
So... I see the singularity coming, and I'm not thrilled. The dangers
inherent in it are so great that the only thing comparable is having a bunch
of psychotic apes controlling planet-killing weapons of mass destruction.
(Oops!) Well, if I had a choice, I'd choose the singularity. But I don't
find it a happy choice. (OTOH, I don't have a choice. I get to live with the
first condition until the second one occurs. Then I get to hope to live
through it.) I suppose seeing things this way makes me a bit dyspeptic.
Friendly AI is the big hope. It's not the only one...some neutral AGIs
wouldn't be impossible to live with. (Well, not really neutral, but also not
measuring up to "Friendly", either. Consider the "Accelerando" scenario. The
AIs there weren't exactly friendly, but it was possible to live with them. I
feel that this kind of outcome is more probable, and less desirable [to
us], than a genuinely friendly AI.)