From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sat May 22 2004 - 21:46:47 MDT
Michael Roy Ames wrote:
> Eliezer,
>
> You wrote:
>
>>I was speaking of me *personally*, not an FAI.
>>An FAI is *designed* to self-improve; I'm not.
>>And ideally an FAI seed is nonsentient, so that
>>there are no issues with death if restored from
>>backup, or child abuse if improperly designed
>>the first time through.
>
> Your definition of 'sentient' must be substantially different from mine when
> you suggest that an FAI seed might be nonsentient. Could you give us your
> working definition for 'sentient'?
The C-word ("consciousness"). The Q-word ("qualia"). That which causes
us to mistakenly believe that if we think, therefore we must exist.
That which our future selves shall come to define as personhood. What I
want to say is just, "I don't want to hurt a person", but I don't know
what a person is. If I could give a more specific definition, I would
have already solved the problem.
I need to figure out how to make a generic reflective Bayesian reasoner
with a flaw in its cognitive architecture that causes it to be puzzled
by the certainty of its own existence, and ask nonsensical questions
such as "Why does anything exist?". Then I'll know what *not* to do.
It worries me that our future selves may come to define personhood by
reference to qualities other than the C-word, but there has to be
somewhere to draw the line. Natural selection isn't an entity I
sympathize with, and yet natural selection is a functioning, if
ridiculously inefficient, optimization process. I figure that if I find
a structure whose exclusion definitively rules out the C-word, and if
I also use pure Bayesian decision theory in place of pleasure and pain,
that'll cover most of the bases.
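
To make "pure Bayesian decision theory in place of pleasure and pain"
concrete: the agent simply picks whichever action maximizes expected
utility under its posterior beliefs, with no reinforcement signal
anywhere in the loop. Here is a minimal sketch in Python, where the
states, actions, probabilities, and utilities are all invented for
illustration:

# Minimal sketch of action selection by pure Bayesian expected utility,
# rather than by a pleasure/pain reinforcement signal. All states,
# actions, probabilities, and utilities below are invented placeholders.

def posterior(state, evidence):
    # Hypothetical posterior P(state | evidence); a real reasoner would
    # compute this by Bayesian updating over a world model.
    table = {
        ("rain", "dark clouds"): 0.8,
        ("sun", "dark clouds"): 0.2,
    }
    return table[(state, evidence)]

def utility(action, state):
    # Hypothetical utility function U(action, state): a fixed preference
    # ordering over outcomes, not a felt reward.
    table = {
        ("take umbrella", "rain"): 5, ("take umbrella", "sun"): 2,
        ("leave umbrella", "rain"): -10, ("leave umbrella", "sun"): 3,
    }
    return table[(action, state)]

def choose(actions, states, evidence):
    # argmax over actions of the posterior-weighted expected utility:
    #   EU(a) = sum over s of P(s | evidence) * U(a, s)
    def expected_utility(action):
        return sum(posterior(s, evidence) * utility(action, s)
                   for s in states)
    return max(actions, key=expected_utility)

print(choose(["take umbrella", "leave umbrella"],
             ["rain", "sun"], "dark clouds"))
# -> "take umbrella": EU = 0.8*5 + 0.2*2 = 4.4,
#    versus 0.8*(-10) + 0.2*3 = -7.4 for leaving it.

The point of the sketch is the shape of the loop: beliefs feed a fixed
utility function through an argmax, and nothing in the agent is built
to feel the outcome.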
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence