From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Thu Jul 14 2005 - 00:26:13 MDT
On Thu, Jul 14, 2005 at 06:02:48PM +1200, Marc Geddes wrote:
> >Unless you're positing that a boundary exists such that one cannot
> >be smarter than the boundary without also being nice, and that
> >boundary is *above* the level of intelligence that humans have
> >access to.
>
> That's exactly what I'm positing, Robin! (re-read what I said)
Oh, I read it. I was giving you the benefit of the doubt. I won't
make that mistake again.
This seems, to my intuition, to be some of the most amazing lunacy
I've ever heard.
Even if you're *right*, it's still Pascal's wager: you seem to be
assuming that a morality *you* find acceptable is the universal
truth, just as Pascal assumed that the wager was relevant only for
*his* god.
Looking around me, I see substantial evidence that if there is *any*
universal morality, it is only "nature red in tooth and claw"
natural selection.
If so, and you are right, it would be impossible to build *Friendly*
AI.
If you are positing both what I said above *AND* that the universal
morality in question happens to be one that *you* would find
"friendly", even though you believe yourself to be below the "must
be in line with universal morality" boundary, well, that goes beyond
a merely crazy idea into anthropocentrism of the worst kind.
I firmly support your decision not to debate this. When you've got
math, let us know.
-Robin
--
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/