Re: singularitarian premises

From: Robin Lee Powell
Date: Wed Jan 26 2005 - 18:14:52 MST

On Wed, Jan 26, 2005 at 04:00:23PM -0800, Daniel Radetsky wrote:
> Therefore, it seems to me to be disingenuous to call your only
> weapon a weapon of last resort. Correct me, though; I'm not very
> well read in these areas.

Well, sure, if nothing interesting happens to FAI theory between now
and the CV making its decision.

I consider that to have a probability of zero.

> Another thing: maybe I have the wrong picture of the Last Judge
> scenario. As far as I understood it, Last Judge necessitates an AI
> Jail of some type. That is, the AI is operating under some type of
> restriction (e.g. within a single mainframe, but also maybe some
> sort of software directive "You may not do X." The specific type
> of restriction is unimportant to the argument), which the Last
> Judge can remove.

Err, no. Not even close. The RPOP determines the CV of humanity,
and is then asked to show it to The Last Judge for approval. *Show*
it, as in indicate what it would look like and why, not enact it.
The Last Judge approves or disapproves. In the former case, the
RPOP goes and Makes It So. In the latter case, not. No jail is
ever involved; the RPOP's co-operation is required (as with any
useful FAI theory).
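The show-then-approve flow described above can be sketched as a tiny decision procedure. This is purely illustrative: `compute_cv`, `judge_approves`, and `enact` are hypothetical stand-ins for processes no one knows how to build, not references to any real system.

```python
# Hypothetical sketch of the Last Judge flow described above.
# All names are illustrative placeholders, not a real FAI design.

def last_judge_flow(compute_cv, judge_approves, enact):
    """The RPOP determines humanity's CV, *shows* it to the Last
    Judge, and enacts it only on approval; otherwise it does nothing.
    No jail is involved -- the veto path is simply inaction."""
    cv = compute_cv()        # RPOP determines the CV of humanity
    if judge_approves(cv):   # Judge sees what it would look like and why
        enact(cv)            # RPOP goes and Makes It So
        return "enacted"
    return "vetoed"          # RPOP's co-operation means it just stops
```

The point the sketch captures is that the veto branch requires no containment: the RPOP is never restrained, it simply declines to act without approval.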


--
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT