Re: Domain Protection

From: Russell Wallace (russell.wallace@gmail.com)
Date: Sun May 08 2005 - 21:27:48 MDT


On 5/9/05, Ben Goertzel <ben@goertzel.org> wrote:
>
> Hey Russell,
>
> Seems like this is basically a variant of the well-worn "Sysop Scenario",
> isn't it? You want the FAI to be a sysop that allows a universe with
> multiple domains...

Yes. One way to look at it is: Eliezer's earlier proposal was for a
Sysop with volition at the individual level; his latest one is for
volition at the species level. I'm arguing that the latter puts all
our eggs in one basket, while the former distributes the eggs one atom
per basket, thereby destroying them in order to save them; the domain
idea sits between these two extremes.

> Very well -- I agree that's a nice scenario to envision -- but it raises
> the question of how to construct an AI that
>
> a) will be powerful enough to act as such a Sysop, yet
>
> b) can be relied upon to keep acting as such a Sysop instead of changing its
> mind and doing something nasty

Yes, that's why I described it as a theory of Friendliness _content_
as opposed to the (first and harder) problem of Friendliness
_architecture_.

> Basically, I see your proposal as making a (nice, but not terribly original)
> statement of a decent GOAL for the post-Singularity cosmos, but not as
> telling us anything about how to ACHIEVE this goal in a sustainable way...

No indeed! I'm trying to figure that out. (While at the same time
trying to earn a living... *pant, sweat*... Eliezer IIRC reckons doing
both at once is flatly impossible, and by the nine hells he's probably
right; still, all I can do is make the attempt!) Anyway, I'm afraid the
question of how to actually create a reliably Friendly AI is one I
don't quite have an answer to just yet :)

- Russell


