Re: Fighting UFAI

From: Russell Wallace (russell.wallace@gmail.com)
Date: Thu Jul 14 2005 - 17:01:25 MDT


On 7/14/05, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
> Okay. Let me know if you have anything new to contribute to the conversation
> that started here:
>
> http://www.sl4.org/archive/0401/#7483

Hmm... actually, I think that within the assumptions the participants
in that conversation appeared to be making, you're right.
Specifically, I think the disagreement comes down to the speed and
timing of takeoff.

I think your vision of the default scenario (the one most likely to
occur unless people see it as an existential danger and take steps to
avert it) is as follows (correct me if I'm wrong):

1) An AI program in someone's basement crosses a threshold that allows
it to undergo "hard takeoff" and begin self-improving at a very high
and sustained rate, so that it can become more powerful than the rest
of the world before any effective reaction can take place.
2) This happens relatively early, perhaps in the next few years, and
at any rate in a world not already full of proto-AIs, uploads,
nanotech weapons, etc., that might have a chance of catching up.
3) Intelligence per se is in any case enormously powerful, so much so
that if the initial AI becomes smarter than everything else, it can
defeat everything else even if heavily outclassed in material terms.

The combination of these three things implies that the far-future
population will have evolved from a single, compact core, and the
combination of 1 and 2 means that the core will have started with a
simple, low-entropy goal system. Given that, yes, Darwinian evolution
may well not apply.

My vision of the default scenario differs on all three counts. I think
there is no single threshold beyond which the rules abruptly change;
if we ever do have "each hour now longer than all the time that went
before", in Vinge's memorable phrase, that will require highly
developed ultratechnology, which means it will come late, by which
time ultratechnology will be widely available. I also think
intelligence is not quite as dominant over numbers as you think it is,
so a single entity that got ahead could still be pulled down.

Thus the coming transition will resemble previous ones (biological,
cultural, technological) in that the population going into it will be
large, with very high total entropy; there will be no single goal
system, and no single core to preserve one. Darwinian evolution will
therefore dominate the overall dynamics, because it will be the _only_
force with global scope, and so the population will converge on a
nonsentient optimal self-replicator from many directions.

- Russell
