From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jun 29 2002 - 18:02:53 MDT
Ben Goertzel wrote:
>
> The difference of opinion between us seems to be that I think there will be
> a moderately long phase in which we have an AGI system that:
>
> a) has an interesting degree of general intelligence, suitable for
> experimenting with and learning about AGI
>
> b) has no chance of undergoing a hard takeoff
>
> You and Eliezer seem to assume that as soon as a system has an
> at-all-significant degree of general intelligence, it's a nontrivial hard
> takeoff risk. As if, say, a "digital dog" is going to solve the hard
> computer/cognitive science problems of optimizing and improving its own
> sourcecode!
No, we think that it is worthwhile to prevent even trivially small existential risks.
> I think we have confidence about different things. You and Eli seem to have
> more optimism than me that simple hard-takeoff-prevention mechanisms will
> work.
No, we think that any simple mechanism that has a chance of working
should be implemented. Not a certainty, a chance. You don't *need* a
certainty for the mechanism to be a good idea. All you need is a
*chance*. If there's a *chance* it will work, then you should do it.
If you have a small chance of preventing a small chance of a very large
disaster, then take it. It doesn't have to be a large chance of
preventing a small chance of a very large disaster for us to think it's
worthwhile: small * small * very large == worth doing.
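
To make that arithmetic concrete, here is a minimal expected-value
sketch in Python. Every number in it is an illustrative placeholder
chosen for the example, not an estimate anyone has actually made:

    # Expected-value sketch; all numbers are made-up placeholders.
    p_mechanism_works = 0.01    # small chance the simple mechanism works
    p_hard_takeoff = 0.001      # small chance of an unwanted hard takeoff
    disaster_cost = 10 ** 9     # "very large" disaster, arbitrary units
    mechanism_cost = 10         # effort to implement the mechanism

    expected_benefit = p_mechanism_works * p_hard_takeoff * disaster_cost
    # small * small * very large: worth doing whenever the product
    # exceeds the cost of implementing the mechanism.
    print(expected_benefit > mechanism_cost)  # True for these numbers
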
> And I seem to have more confidence than you that there will be a
> period of infrahuman AGI in which the risk of hard takeoff is very very very
> very low, in which all sorts of things to do with computer consciousness,
> hard takeoff prevention, intelligence measurement and AGI in general can be
> studied.
Ben, I also think that there's a period of infrahuman AGI in which the
chance of hard takeoff is very very very very low, and in fact, as you
recall, I don't think the Novamente described in your manuscript will be
capable of doing a hard takeoff, ever. That's what I *think*. There
are always surprises.
> It seems to me that you guys don't really accept the diversity of human
> ethical systems.
It seems to me that we are thinking about ethics in fundamentally
different ways. The diversity of human ethical systems is simply not
relevant to what I am saying, and the fact that you keep bringing it up
shows that my point is not getting across.
> It is not possible to teach Novababy a "universal human morality or ethics"
> because no such thing exists.
It is not your job or mine to produce such an ethics.
> While I understand the need to temper my natural entrepreneurial,
> risk-taking streak in these matters, I think your criticism is a bit too
> strong here. You need to understand that my estimate of the current
> existential risk of Novamente having a hard takeoff is really
> infinitesimally small.
So is mine. But I don't know what surprises are embedded in the space
of self-modifying algorithms, and we have fundamentally different
pictures of how a hard takeoff works. I don't know how the dominoes are
spaced and I don't know which domino knocks over all the other dominoes.
One innocent-looking algorithm might produce an algorithm that
produces an algorithm that produces an algorithm, et cetera. You would
need a map of the entire fitness landscape to actually *know* that a
hard takeoff wouldn't happen. I am not saying that Novamente might
knock over much bigger dominoes than you think; I am saying that the
dominoes might unexpectedly turn out to be all lined up. Just because I
visualize an actual hard takeoff as occurring when an AI learns to
solve, using general intelligence, the problem of designing cognitive
systems and implementing code for it, does not mean that all hard
takeoffs are constrained to occur in this way.
-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence