Re: FW: Group releases "Friendly AI guidelines"

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Apr 19 2001 - 13:56:26 MDT


Arctic Fox wrote:
>
> Congratulations on the Wired article Eliezer. Could you expand on the
> part of the article quoted below? Have we really got as close as a 5%
> chance of reaching Singularity *today*? Or was that more artistic
> license on their part?
>
> > When one researcher booted up a program he hoped would be AI-like,
> > Yudkowsky said he believed there was a 5 percent chance the
> > Singularity was about to happen and human existence would be forever
> > changed.

He's talking about Eurisko, the first self-improving AI in the entire
history of time. My statement was that Lenat should, at that point in
history, have evaluated a 5% chance that the thing would go thermonuclear;
it was, after all, the first mind in all history with access to its own
source code. In retrospect, and with what we now know about the
dead-endedness of classical AI, it's obvious that Eurisko had a 0.1%
chance, if that, but Lenat had no logical way of knowing that; when he
booted up Eurisko it was literally the most advanced self-modifying mind
in history. Now we know more about how self-modifying minds behave and
why self-modifying heuristics run out of steam... but we didn't know it
*then*. Like I said: there are different rules for "conservative" in Friendly
AI versus AI in general.

But I certainly said nothing whatsoever at the time. Even I didn't know
about the Singularity when I was four years old, which is when Eurisko was
booting up. So marking this down as a failed prediction is a bit
inaccurate.

Ah, well, there is still hope in the world...
http://missingmatter.net/

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


