RE: All sentients have to be observer-centered! My theory of FAI morality

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Fri Feb 27 2004 - 00:12:56 MST


 --- Rafal Smigrodzki <rafal@smigrodzki.org> wrote:
> Marc wrote:
> > My main worry with Eliezer's ideas is that I
> > don't think that a non observer-centered sentient
> > is logically possible. Or if it's possible, such
> > a sentient would not be stable. Can I prove this?
> > No. But all the examples of stable sentients
> > (humans) that we have are observer centered. I
> > can only point to this, combined with the fact
> > that so many people posting to sl4 agree with me.
> > I can only strongly urge Eliezer and others
> > working on AI NOT to attempt the folly of trying
> > to create a non observer centered AI. For
> > goodness sake don't try it! It could mean the
> > doom of us all.
>
> ### Marc, remember that every single human you have
> met is a product of evolution, and replicates his
> genes autonomously (not vicariously like a worker
> bee). Self-centered goal systems are a natural
> result of this evolutionary history. Making an FAI
> is however totally different from evolving it - and
> the limitation to self-centered goal systems no
> longer applies. In fact, it would be a folly to
> abide by this limitation, and non-observer-centered
> systems should have a much better chance of staying
> friendly (since there is no self-centered goal
> system component shifting them away from
> friendliness).
>
> Rafal
>

Yeah Rafal,

I don't regard the evolutionary arguments as very
convincing. They're based on observation, not
experiment. Besides, it's only very recently in
evolutionary history that the first sentients (humans)
appeared. It's the class of sentients that is
relevant to FAI work; evolutionary observations about
non-sentients are not likely to say much of relevance.

In any event, I don't regard non observer-centered
sentients as even desirable (see my other replies).
If you strip out all observer-centered goals, you're
left with normative altruism. All sentients would
converge on this, and all individual uniqueness would
be stripped away. You'd be left with bland
uniformity. An empty husk. Universal Morality is
probably just a very general set of constraints, and
FAIs following this alone would be quite unable to
distinguish between the myriad of interesting personal
goals that are consistent with it. Everything that
didn't hurt others (assuming that Universal Morality
is volition-based) would be equally 'Good' to such an
FAI. There would be no possibility of anything
uniquely human or personal. For instance, the two
outcomes 'Rafal kills himself' and 'Rafal doesn't kill
himself' would be designated as morally equivalent
under Volitional Morality.

In short, totally non-observer-centered FAIs just
wouldn't make interesting drinking buddies.

Now, let's get back to bashing Dr J's socialism ;)

=====
Please visit my web-site at: http://www.prometheuscrack.com



