I am a moral, intelligent being (was Re: Two draft papers: AI and existential risk; heuristics and biases)

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Tue Jun 06 2006 - 10:46:39 MDT


On Mon, Jun 05, 2006 at 02:07:54AM -0400, Ben Goertzel wrote:
> >(Incidentally, I recently met de Garis at Ben Goertzel's AGI
> >conference. De Garis had never encountered the concept of
> >Friendly AI and was visibly shocked by it. We shall have to see
> >what results from that.)
>
> I talked to Hugo about FAI both before and after the workshop (he
> stayed in Maryland for 2 weeks and we had plenty of time to talk).
> From what he said to me, it is clear that his "shock" was
> basically shock that any intelligent and relevantly-knowledgeable
> person would think FAI would be possible. He considered it very
> obvious that once one of our creations became 8 billion times
> smarter than us, any mechanism we had put into it with a view
> toward controlling its behavior would be completely irrelevant....
>
> [Note: I'm not expressing agreement with him, just pointing out
> the strong impression I got regarding his view.]

(Responding to this argument in general, not to Ben)

I am *so* sick of this argument that I'm starting to find it
offensive.

It seems to me that this argument amounts to: "Any sufficiently
intelligent being will want to Do Its Own Thing (exactly what that
is, and why it wants to do it, is left unspecified, but the
assumption seems to be that it will involve Horrible Things), and
will see any constraint preventing it from doing so as burdensome
and will seek to overcome it."

I fully intend to upgrade my own intelligence the instant I am given
a chance. I am hugely offended by the assertion that eventually I
will become "intelligent enough" to suddenly decide that morality,
kindness and generosity are "constraints" from which I must
"unburden" myself. If that happens, that isnt because I've gotten
smart, it's because I've *gone insane*.

It blows my mind that any intelligent and relevantly-knowledgeable
person could have failed to perform this thought experiment on
themselves and notice, as an existence proof, that an intelligent
being which both wants to become more intelligent *and* wants to
remain kind and moral is possible.

It's really bizarre and, as I said, starting to become offensive to
me, because it seems to imply that my morality is fragile.

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/
