Re: META: Dangers of Superintelligence

From: Daniel Radetsky
Date: Mon Aug 30 2004 - 09:46:40 MDT

On Sun, 29 Aug 2004 21:46:33 -0400
Eliezer Yudkowsky wrote:

> Daniel Radetsky wrote:
> >
> > Eliezer:
> >
> > Could you give an example of one or two of the ways you would beat someone you
> > were much smarter than? I'm fairly skeptical at this point.
> >
> > Daniel Radetsky
> At tic-tac-toe? I don't think that convincing someone to let me beat them
> at tic-tac-toe is any harder than convincing someone to let me out of the
> AI Box. It would be easier, in fact, because the spirit of the tic-tac-toe
> experiment is such that I could do all sorts of things that I wouldn't do
> in an AI Box because it would contradict the spirit of the game. I could
> offer bribes, I could get the other player drunk, I could tamper with the
> Java applet we were playing, I could edit a video tape of the game and make
> everyone else think I'd won, I could tell you that you were playing X when
> you were supposed to be playing O, etc.
> I think this is a good example of what it looks like to be substantially
> smarter than your hapless target:
> --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for Artificial Intelligence

Okay, but those are all examples of cheating. Intelligence will help you cheat,
but it won't make you any better at playing tic-tac-toe \begin{pretension} qua
game \end{pretension}. I think the point that "Father John" was trying to make
was that if you're actually playing tic-tac-toe (as opposed to cheating), it
doesn't matter how much smarter you are than me, as long as I'm smart enough. He
seems to argue that the same goes for the single-shot AI-box: that he doesn't
need to be very smart at all, just smart enough to keep saying "I'm still not
letting you out." However, this misses the point.
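The underlying game-theoretic claim (tic-tac-toe is a forced draw under perfect play, so a "smart enough" player can never lose without cheating) can be checked directly by brute force. A minimal sketch of that check, my own illustration rather than anything from the thread, using plain minimax over the full game tree:

```python
# Minimax over the complete tic-tac-toe game tree. The value of the
# empty board under perfect play is 0, i.e. a forced draw: no amount of
# extra intelligence beats a merely-adequate opponent at the game itself.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Game value with perfect play: +1 if X wins, -1 if O wins, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full, nobody won: draw
    values = []
    for i in moves:
        board[i] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = None
    return max(values) if player == 'X' else min(values)

value = minimax([None] * 9, 'X')
# value == 0: from the empty board, perfect play on both sides draws.
```

So the "smart enough" gatekeeper analogy holds for the game proper; Eliezer's examples all work by stepping outside the game.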

Daniel Radetsky

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:48 MDT