RE: FW: Group releases "Friendly AI guidelines"

From: Ben Goertzel
Date: Thu Apr 19 2001 - 14:16:41 MDT

> He's talking about Eurisko, the first self-improving AI in the entire
> history of time. My statement was that Lenat should, at that point in
> history, have evaluated a 5% chance that the thing would go thermonuclear;
> it was, after all, the first mind in all history with access to its own
> source code. In retrospect, and with what we now know about the
> dead-endedness of classical AI, it's obvious that Eurisko had a 0.1%
> chance, if that, but Lenat had no logical way of knowing that; when he
> booted up Eurisko it was literally the most advanced self-modifying mind
> in history. Now we know more about how self-modifying minds behave and
> why self-modifying heuristics run out of steam... but we didn't know it
> *then*.

I'm sure that Lenat, at the time, knew enough about his AI system to know that
there was essentially zero chance of anything like the Singularity happening
with it.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT