Re: FW: Group releases "Friendly AI guidelines"

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Apr 19 2001 - 14:24:41 MDT


Ben Goertzel wrote:
>
> > He's talking about Eurisko, the first self-improving AI in the entire
> > history of time. My statement was that Lenat should, at that point in
> > history, have evaluated a 5% chance that the thing would go thermonuclear;
> > it was, after all, the first mind in all history with access to its own
> > source code. In retrospect, and with what we now know about the
> > dead-endedness of classical AI, it's obvious that Eurisko had a 0.1%
> > chance, if that, but Lenat had no logical way of knowing that; when he
> > booted up Eurisko it was literally the most advanced self-modifying mind
> > in history. Now we know more about how self-modifying minds behave and
> > why self-modifying heuristics run out of steam... but we didn't know it
> > *then*.
>
> I'm sure that Lenat, then, knew enough about his AI system to know that
> there was a roughly 0% chance of anything like the Singularity happening
> with it.

"'Possibly,' he conceded with equanamity." But every time you write a
computer program of a kind that never existed before, there's a sense in
which you really don't know what it's capable of. You know that it's only
so smart at any given time, but you don't know if it's right on the edge
of a self-improvement curve that goes right off to infinity. I can easily
see Lenat estimating the probability as being very close to zero, and
justly so, but not actually zero. It wouldn't have been *that* hard to
put in an improvements counter and something that halted and paged Lenat
if more than a thousand improvements went by without further user input.
This is hardly a full-featured resilient Friendliness system, but it's
just that tiny increment better than nothing, which is what's needed for a
system that has a tiny increment of a chance of doing something
unexpected. And Lenat could probably have done it in thirty minutes, so
it's not like it would have been a major project.
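
For concreteness, a minimal sketch of that kind of improvements-counter
tripwire might look like the following (in Python, just to illustrate the
idea; every name here is a hypothetical placeholder, not anything from
Eurisko itself):

    IMPROVEMENT_LIMIT = 1000  # halt after this many unattended self-modifications


    def page_operator(contact: str, message: str) -> None:
        # Stand-in for whatever paging mechanism was available at the time;
        # here it just prints.
        print(f"PAGE {contact}: {message}")


    class TripwireMonitor:
        """Counts self-improvements and trips once too many accumulate
        without further user input."""

        def __init__(self, limit: int = IMPROVEMENT_LIMIT):
            self.limit = limit
            self.improvements_since_input = 0

        def record_improvement(self) -> None:
            self.improvements_since_input += 1

        def record_user_input(self) -> None:
            # A human is back in the loop; reset the counter.
            self.improvements_since_input = 0

        def tripped(self) -> bool:
            return self.improvements_since_input > self.limit


    if __name__ == "__main__":
        # Toy demonstration: 1001 "improvements" with no user input trips the wire.
        monitor = TripwireMonitor()
        for _ in range(1001):
            monitor.record_improvement()
        if monitor.tripped():
            page_operator("operator", "Improvement limit exceeded; system halted.")

The point isn't the particular numbers or names, just that the whole
mechanism is a counter, a reset, and a threshold check.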

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


