singularitarian premises

From: Daniel Radetsky (daniel@radray.us)
Date: Tue Jan 25 2005 - 21:45:06 MST


Folks:

Hello. I haven't posted here for some time, but now I need some information.
I'm working on a criticism of collective volition, and to do that, I need to
establish that singularitarians hold certain beliefs. I wonder if any of you
would care to agree, disagree, or comment on the following:

1. In developing FAI, some form of Last Judge/AI Jail will be used.

2. If a last judge is used, the judge will have sole authority to determine
whether an AI is sufficiently friendly/correctly programmed/whatever he's
supposed to determine.
(I tried to find the archived material on the "Sergeant Schultz Strategy," but I
guess it wasn't always called that.)

3. It would be intractable to audit the "grown" code of a seed AI. That is, once
the AI begins to self-improve, we have to evaluate its (I refuse to use
posthuman pronouns) degree of friendliness on the basis of its actions.

It has been suggested that I am "barking up the wrong tree" by attempting to
poll singularitarians, as you are all too diverse. That may be, but I've never
tried, so I'll believe it when I see it.

Yours,
Daniel Radetsky



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT