Re: META: Dangers of Superintelligence

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sun Aug 29 2004 - 19:46:33 MDT


Daniel Radetsky wrote:
>
> Eliezer:
>
> Could you give an example of one or two of the ways you would beat someone you
> were much smarter than? I'm fairly skeptical at this point.
>
> Daniel Radetsky

At tic-tac-toe? I don't think that convincing someone to let me beat them
at tic-tac-toe is any harder than convincing someone to let me out of the
AI Box. It would be easier, in fact, because the spirit of the tic-tac-toe
experiment is such that I could do all sorts of things that I wouldn't do
in an AI Box because it would contradict the spirit of the game. I could
offer bribes, I could get the other player drunk, I could tamper with the
Java applet we were playing, I could edit a video tape of the game and make
everyone else think I'd won, I could tell you that you were playing X when
you were supposed to be playing O, etc.

I think this is a good example of what it looks like to be substantially
smarter than your hapless target:

http://www.somethingawful.com/articles.php?a=287

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:44 MST