Re: Fighting UFAI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jul 10 2005 - 20:09:03 MDT


Joel Peter William Pitt wrote:
> On 7/10/05, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
>
>>The point of no return is the enemy that is substantially smarter than you
>>are. Opposing an entity dumber than me, I have a real chance of winning no
>>matter what apparent advantages they start out with. Against a better mind
>>than mine, I would not expect to win regardless of material advantages.
>
> I'd have to disagree.
>
> Pure intelligence is useful and greatly influences the odds of a battle
> when strategizing. But let's say we throw you up against 1000 ravenous,
> rabid rats, which, taken on their own, I'm sure everyone would agree are
> less intelligent than Eliezer. The weight of numbers in warfare can't
> just be ignored.

If *you* throw me up against 1000 ravenous rats, maybe I lose. But here the
rats are only a tool; my true enemy is you. The *rats themselves* have no
ability to trap me in a losing battle, no ability to carefully plan to take me
by surprise. If it is just me and the rats, I can pick my own battleground
and show up in a Sherman tank. I will know we are at war long before they do.
The weight of creativity in warfare can't just be ignored.

> Sure, the intelligence difference between us and other
> mammals is probably very small in comparison to the difference between
> an AGI and us, but there would be around 5-6 billion of us vs. a lone
> AGI, so I think it balances out (at least for the beginning stages of
> takeoff).
>
> There are of course differences between the situation we are considering
> and the above - since assumedly a threatening AGI would have some
> influence over the physical realm and may be able to create minions to
> assist it.

Assumedly a threatening transhuman AGI would have influence over the mental
realm and may be able to rapidly sway existing humans to assist it for the
three days or so before it has its own nanotechnology. See also the AI-Box
Experiment.

> Back to Phillip's original question about when it is too late to do
> anything... well I think when di/dt (i = difference in intelligence)
> outpaces our ability to hold parliamentary sessions to decide
> what to do, then we are all doomed. ;P

Ja, shouldn't take long either.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

