Re: Volitional Morality and Action Judgement

From: Marc Geddes (
Date: Mon May 24 2004 - 00:49:50 MDT

 --- Eliezer Yudkowsky <> wrote:
> The C-word ("consciousness"). The Q-word ("qualia"). That which
> causes us to mistakenly believe that if we think, therefore we must
> exist. That which our future selves shall come to define as
> personhood. What I want to say is just, "I don't want to hurt a
> person", but I don't know what a person is. If I could give a more
> specific definition, I would have already solved the problem.
>
> I need to figure out how to make a generic reflective Bayesian
> reasoner with a flaw in its cognitive architecture that causes it to
> be puzzled by the certainty of its own existence, and ask nonsensical
> questions such as "Why does anything exist?". Then I'll know what
> *not* to do.
>
> It worries me that our future selves may come to define personhood by
> reference to other qualities than the C-word, but there has to be
> somewhere to draw the line. Natural selection isn't an entity I
> sympathize with, and yet natural selection is a functioning, if
> ridiculously inefficient, optimization process. I figure that if I
> find a structure that I can exclude to definitely rule out the
> C-word, and if I also use pure Bayesian decision theory in place of
> pleasure and pain, that'll cover most of the bases.
>
> --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for Artificial Intelligence


I respectfully continue to point out that your approach to FAI is the
equivalent of trying to 'square the circle':

*General intelligence without consciousness?

*General intelligence that isn't observer centered?

*General intelligence without a self?

*Total altruism?

*FAI that ends up with a morality totally independent of the
programmers?

*FAI dedicated to 'saving the world'?

Impossible AND misguided.

More likely, any general intelligence must necessarily have: a 'self',
consciousness, some degree of observer centeredness, some
non-altruistic aspects to its morality, and some input from the
'personal' level into its morality; helping the world would only be a
*secondary* consequence of its main goals.

Of course, all this is only my opinion. Take it or
leave it. But don't say I didn't tell you so...

"Live Free or Die, Death is not the Worst of Evils."
                                      - Gen. John Stark

"The Universe...or nothing!"

Please visit my web-sites.

Science-Fiction and Fantasy:
Science, A.I., Maths:


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT