From: Robin Lee Powell (firstname.lastname@example.org)
Date: Wed Jan 26 2005 - 00:31:17 MST
On Tue, Jan 25, 2005 at 08:45:06PM -0800, Daniel Radetsky wrote:
> Hello. I haven't posted here for some time now, but now I need
> some information. I'm working on a criticism of collective
> volition, but to do that, I need to establish that
> singularitarians hold certain beliefs. I wonder if any of you
> would care to agree/disagree/comment on the following:
> 1. In developing FAI, some form of Last Judge/AI Jail will be
> used.
Those are *utterly* different things. I disagree in both cases. A
Jail is insanely stupid, and a Last Judge is a weapon of absolute
last resort.
> 2. If a last judge is used, the judge will have sole authority to
> determine whether an AI is sufficiently friendly/correctly
> programmed/whatever he's supposed to determine.
No. The Last Judge accepts or refuses the *outcome* of CV, which is
a very different thing.
> 3. It would be intractable to audit the "grown" code of a seed AI.
> That is, once the AI begins to self improve, we have to evaluate
> its (I refuse to use posthuman pronouns) degree of friendliness on
> the basis of its actions.
If we can audit its code, it's not enough smarter than us yet to be
dangerous.
--
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT