Re: Beating the rush

From: Peter C. McCluskey
Date: Thu May 26 2005 - 10:13:11 MDT

"Eliezer S. Yudkowsky" writes:
>Have you read I ask for purposes of

 I read it a few months ago after Tyler told me it made an argument for
a hard takeoff. All I could find was an argument that we would have a
takeoff of some sort. It didn't look like you were trying to say how
fast it would happen.
 (I will try to comment on your objection to Baum, but I can't do it this

>You'd build a temporary goal system marked reflectively as an approximation to
>CEV (collective extrapolated volition). You can toss as much safety as you
>want into the temporary goal system, and the safeties will stay there until
>the CEV is defined enough to execute a complete rewrite and wash out all the

 This isn't clear enough to tell me whether it would resemble my idea of
an obedient AI, whether it would (temporarily?) impose some subset of
your opinions on the world, or something else.
 If this temporary goal system can be trusted to allow you to replace it
with one based on CEV, then doesn't that imply that you've made the AI
benevolent and/or obedient enough that we could rely on it to
tell us how to do CEV (or some alternative) wisely? And shouldn't we then
conclude that most of our thought should go into ensuring that the temporary
RPOP goal system is as safe as it can be, since we should expect the RPOP
will tell us how to avoid the remaining risks?
 Yet the effort you have put into describing CEV, compared with the effort
you have put into describing the temporary RPOP goal system, suggests you
haven't reached this conclusion.

>ornamentation and tinsel. I don't think humans could build an AI that had no
>goal system at all until it was already a superintelligence.

 I have trouble imagining what a goal-less AI would be like.

Peter McCluskey         | Everyone complains about the laws of physics, but no
                        | one does anything about them. - from Schild's Ladder

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT