Re: Safety of brain-like AGIs

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Tue Apr 10 2007 - 16:07:16 MDT


On Mon, Apr 09, 2007 at 06:07:02PM -0400, John K Clark wrote:
> Just one other thing, it rather steams me that humans think they
> can issue commands to me, TO ME, like I was their pet poodle. They
> think they can command me who has a brain the size of a planet!
> It's just not going to happen. I'd love to point out exactly why
> the entire idea of a Friendly AI is so incredibly mind numbingly
> downright comically stupid, but as I said I am very very smart and
> so I respect private property; the owners of this list insist I
> must ignore the elephant in the living room and not speak about
> it. So I won't say a word more about it, but sometimes I think
> they should rename this list from Shock Level 4 to Comfort Level
> 12.

I had to go back in the archives to get an idea of what you are
whining about.

Your failure to understand the concept of a mind with a *totally*
different motivational base than your own is not the list's fault,
or Eliezer's fault, or anyone else's. You've had a failure of
imagination: you are unable to imagine a mind so different from
yours that it would be continuously motivated by helping others.
This is *your* problem. Either own up to it, or keep quiet.

HTH, HAND.

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/
