From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Wed Jan 26 2005 - 05:06:53 MST
--- Daniel Radetsky <daniel@radray.us> wrote:
> Folks:
>
> Hello. I haven't posted here for some time,
> but now I need some information.
> I'm working on a criticism of collective
> volition, but to do that, I need to
> establish that singularitarians hold certain
> beliefs. I wonder if any of you
> would care to agree/disagree/comment on the
> following:
>
> 1. In developing FAI, some form of Last
> Judge/AI Jail will be used.
>
> 2. If a last judge is used, the judge will have
> sole authority to determine
> whether an AI is sufficiently
> friendly/correctly programmed/whatever he's
> supposed to determine.
> (I tried to find the archived material on the
> "Seargent Schulz Strategy," but I
> guess it wasn't always called that)
>
> 3. It would be intractable to audit the "grown"
> code of a seed AI. That is, once
> the AI begins to self improve, we have to
> evaluate its (I refuse to use
> posthuman pronouns) degree of friendliness on
> the basis of its actions.
>
> It has been suggested that I am "barking up the
> wrong tree" by attempting to
> poll Singularitarians, as you all are too
> diverse. That may be, but I've never
> tried, so I'll believe it when I see it.
>
> Yours,
> Daniel Radetsky
>
As far as I know, I'm the one who brought the
Sergeant Schulz strategy into the discussion,
i.e., try to deceive the stupidest jailer. I may
have given it a name, but I certainly didn't invent
the strategy, which is probably as old as jails.
Tom Buckner