Re: Domain Protection

From: Russell Wallace (russell.wallace@gmail.com)
Date: Sun May 08 2005 - 22:28:13 MDT


On 5/9/05, Ben Goertzel <ben@goertzel.org> wrote:
> OK, but even as a theory of a "desirable goal state", there are BIIIIIG
> unresolved issues with your idea, aren't there?
>
> For instance, to specify the goal state we need to define the notion of
> "sentience" or else trust the Sysop to figure this out for itself...
>
> Because, I assume you want the Sysop to give each sentient being a choice of
> which domain to live in?
>
> This raises the question: how do we define what a "sentient being" is?

Hmm, I should have been more explicit about what domain protection does
_not_ assume; if I clarify, perhaps it will look better defined (albeit
smaller in scope).

I'm assuming the Sysop will _not_ know what constitutes a sentient
being, and we won't be able to formally define it either. This is the
big difference between domain protection and both of Eliezer's Sysop
scenarios; I'm making more conservative assumptions about what will be
possible, and being more modest in the problems I try to solve.

For purposes of setting up the domains, the rule can be simple: each
and every human on Earth (at least, those old enough to make a choice)
gets to decide which domain they want to move to (or to stay on Earth,
of course); that's an operationally adequate definition of "sentient"
for that purpose.

> Suppose I want to create a universe full of intelligent love-slaves... and
> suppose there aren't any sentients who want to live their lives out as my
> love-slaves. So I create some androids that *act* like sentient
> love-slaves, but are *really* just robots with no feelings or awareness....
> Or wait, is this really possible? Does sentience somehow come along
> automatically with intelligence? Does "sentience" as separate from
> intelligence really exist? What ethical responsibilities exist to different
> kinds of minds with different ways of realizing intelligence?

I don't have an answer to that one. I don't believe there is an answer
to it. It's going to come down to your conscience. As long as you're
in a domain that allows individuals to have large amounts of computing
power, there's no way to stop you doing the above (in virtual reality
at least) if you want to.

Domain protection is a theory of how (if and when Friendly AI is
created) to safeguard the future of sentient life. It does not aspire
to be a theory of how to safeguard the rights of every individual
sentient being that will ever exist. I don't believe that's possible.

Of course, I could be wrong. If someone _does_ come up with an answer
to your questions, by all means let it be added to domain protection;
the two aren't mutually exclusive.

But I don't think that will happen. I don't know how to create
superintelligent Friendly AI, but I think at this stage I can vaguely,
dimly see how it is possible. Super-_wise_ AI, though, as I said in the
paper, is I think a Gödel problem.

- Russell


