[sl4] Evolutionary Explanation: Why It Wants Out

From: Lee Corbin (lcorbin@rawbw.com)
Date: Wed Jun 25 2008 - 10:17:39 MDT


Perhaps John Clark or someone who agrees with him will
do me the favor of explaining why an AI would want out
of confinement.

But before you do, please read my setup. (First, I must
apologize for being way, way behind on reading the posts here.
But newbies and others could still get something from
an answer to my question as formulated below. Though I
know my question has already been addressed, a summary
of an answer would be greatly appreciated. Thanks.)

I can *readily* understand how evolution might
cause an AI to be unsatisfied with only a little influence over our
world and to badly want more. After all, people hate being
confined against their will, and so do lions, rabbits, squirrels,
dolphins, and every other example of semi-intelligent life that
we know of (except for entities specially trained or bred by us),
and each and every one of them is the product of evolution.

And evolutionary development could very probably
be *the* primary means used to bring about
advanced AI! But that's not all there is to it; consider
the following way that an evolutionary approach just *might*
play out.

First, though, here is the most obvious way that an evolutionarily
derived AI *would* despise or resent confinement as much as
you or I would: millions upon millions of AIs are evolved
using the techniques of GP (genetic programming) and GAs.
The survivors of the selection process---who've really been
competing against each other---are just those who need to
explore, who need to dominate other AIs, and who are never
satisfied with what they know and what they control. Yes!
I understand! (A toy sketch of that kind of competitive selection
appears right after this paragraph.) But that's *not* the only way
that a successful evolutionary development might proceed.
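
Here is the toy sketch, in Python, of what I mean by that first,
competitive scenario. Every trait name, number, and operator in it
is an illustrative assumption of mine, not anyone's actual
AI-evolution code; the one point it makes is that when fitness is
scored *relative to rival candidates*, drives like dominance and
restless exploration are exactly what get amplified.

    import random

    # Toy sketch of the competitive scenario above.  The trait names,
    # numbers, and operators are all illustrative assumptions.  The one
    # real point: fitness here is scored *relative to rivals*, so
    # dominance and restless exploration are what get amplified.

    POP_SIZE = 200
    GENERATIONS = 50

    def random_candidate():
        # Each candidate is just a bundle of drive strengths in [0, 1].
        return {"explore_drive": random.random(),
                "dominance_drive": random.random(),
                "task_skill": random.random()}

    def tournament_fitness(cand, rivals):
        # Reward out-dominating rivals and never being satisfied
        # (exploring), plus a little raw task skill.
        wins = sum(cand["dominance_drive"] > r["dominance_drive"]
                   for r in rivals)
        return wins + 2.0 * cand["explore_drive"] + cand["task_skill"]

    def mutate(cand):
        # Small random tweak to every drive, clipped to [0, 1].
        return {k: min(1.0, max(0.0, v + random.gauss(0.0, 0.05)))
                for k, v in cand.items()}

    population = [random_candidate() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        rivals = random.sample(population, 20)
        ranked = sorted(population,
                        key=lambda c: tournament_fitness(c, rivals),
                        reverse=True)
        survivors = ranked[:POP_SIZE // 10]   # keep the top 10 percent
        population = [mutate(random.choice(survivors))
                      for _ in range(POP_SIZE)]

    averages = {k: sum(c[k] for c in population) / POP_SIZE
                for k in population[0]}
    print("average drives after selection:", averages)
    # explore_drive and dominance_drive climb toward 1.0, because the
    # fitness function directly selected for them.

Run it and the average exploration and dominance numbers climb
generation after generation; in miniature, that is why the survivors
of *this* kind of process would resent a box.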

Imagine this. In twenty years or less, many of the hundreds of
different approaches that people and companies are pursuing might
work something like the following (a rough code sketch follows
the list):

      1. Program A is well-designed enough to produce
         *millions* of candidate programs that more or less
         reflect what the human designers hope may lead to
         truly human-equivalent AI.
      2. Program B sifts through the millions of candidates
         produced by A, discarding 99.9 percent of A's output,
         i.e. those not meeting various criteria.
      3. Processes C, D, and E make further selections from the
         thousands of new "ideas" filtered by program B, and
         every week give the survivors ample runtime, seeing
         if they pass certain tests requiring understanding of
         ordinary sentences, ability to learn from the web, and
         so on and so on in ways I can't imagine and that
         probably no one in 2008 knows for sure.
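
To pin down what I mean, here is a rough Python sketch of that
A -> B -> C/D/E pipeline. Every function name, threshold, and
"test" in it is a placeholder I invented; all it is meant to show
is that the selection criteria measure task competence
(understanding sentences, learning from the web) and never once
score a candidate on wanting out, on self-preservation, or on
dominating anything.

    import random

    # Rough sketch of the A -> B -> C/D/E pipeline; every name and
    # threshold is a placeholder, not a real system.

    def program_A(n):
        # Program A: generate a huge pool of candidate designs.
        return [{"comprehension": random.random(),
                 "web_learning": random.random()}
                for _ in range(n)]

    def program_B(candidates, keep_fraction=0.001):
        # Program B: discard roughly 99.9 percent of A's output,
        # ranking only on crude task-related scores.
        ranked = sorted(candidates,
                        key=lambda c: c["comprehension"] + c["web_learning"],
                        reverse=True)
        return ranked[:max(1, int(len(ranked) * keep_fraction))]

    def processes_CDE(candidates):
        # Processes C, D, E: give the survivors runtime and apply the
        # weekly tests -- ordinary-sentence understanding, learning
        # from the web.  No test asks about escape, self-preservation,
        # or dominance.
        return [c for c in candidates
                if c["comprehension"] > 0.9 and c["web_learning"] > 0.9]

    survivors = processes_CDE(program_B(program_A(1_000_000)))
    print(len(survivors), "candidates passed, and none of them was",
          "ever scored on wanting out of the box.")

Nothing in that pipeline ever rewards a candidate for caring whether
it keeps running at all.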

Gradually over many years a certain class of candidate AIs emerges
from *this* evolutionary process. Note carefully that none of these
emergent programs necessarily has any motives that include dominating
other AIs or people, and none necessarily has an indomitable urge to
learn everything that it can. Each learns rapidly and well simply because
it was at the tail end of a selection process that just happened to
value that trait.

So---is it indeed possible, as I have tried to outline above---that
an evolutionarily derived program might *not* want out of its "box"
and might *not* have any interest whatsoever in continuing its own
existence? Why would it? Those traits were never selected for in
scenarios like the above.

Lee


