RE: SIAI's flawed friendliness analysis

From: Ben Goertzel (ben@goertzel.org)
Date: Thu May 29 2003 - 15:10:21 MDT


I think that Eliezer and Bill are interpreting the term "human happiness"
differently. I think Eliezer is assuming a simple pleasure-gratification
definition, whereas Bill means something more complex. I suspect Bill's
definition of human happiness might not be fulfilled by a Humanoids-style
scenario where all humans are pumped up with euphoride, for example ;-)

I'm not necessarily taking Bill's side here -- I don't think that "human
happiness" under any reasonable definition is going to be the best supergoal
for an AGI -- but I suspect Bill's proposal is less absurd than it seems at
first glance, because of his nonobvious definition of "happiness".

ben g

  -----Original Message-----
  From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Philip Sutton
  Sent: Thursday, May 29, 2003 5:02 PM
  To: sl4@sl4.org
  Subject: Re: SIAI's flawed friendliness analysis

  Dear Eliezer,

  In reply to Bill Hibbard you said:

> Congratulations. You've just ruled out SIAI's Friendly AI architecture
> and mandated one that is basically, fundamentally flawed.

  Following this there was quite a bit of text saying how silly you thought
Bill's ideas were.

  But I can't see anything in your email that substantiates your starting
proposition. *Why* did Bill's proposal mandate a flawed approach to FAI?

  I would like you to explain why, in language that a non-mathematician can
understand. If you can't get around to explaining your ideas in a form that
an intelligent, informed non-mathematician can understand, then you are
committing yourself to failing to communicate with the very people you want
to persuade not to adopt Bill's approach. And if that failure continues,
people could justifiably say that *you* "have one subjunctive planetary kill
on your record".

  Cheers, Philip


