RE: SIAI's flawed friendliness analysis

From: Ben Goertzel (ben@goertzel.org)
Date: Sat May 17 2003 - 11:35:46 MDT


To me, the real problem of Friendly AI is, in non-rigorous terms: What is a
design and teaching programme for a self-improving AI that will give it a
high probability of

a) massively transcending human-level intelligence
b) doing so in a way that is generally beneficial to humans and other
biological sentients as well as to itself and other newly emergent digital
sentients

?

Disputes over details aside, I think this is pretty much what Bill Hibbard
and Eliezer are also talking about....

Many subsidiary issues arise, such as "how high is a probability high enough
for comfort?", "what does 'generally beneficial' mean?", and so forth.

I don't pretend to have an answer to this question. I hope to participate
in working one out perhaps 3-5 years from now, when (if all has gone well
in the meantime) we have a baby Novamente that's actively experiencing and
learning from its environment, learning to communicate with us, and learning
to guide its own actions...

Ben G

  -----Original Message-----
  From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Philip
Sutton
  Sent: Saturday, May 17, 2003 9:52 AM
  To: sl4@sl4.org
  Subject: Re: SIAI's flawed friendliness analysis

  Ben,

> Unfortunately though, as you know, this doesn't solve the real problem
> of FAI ... which you have not solved, and Hibbard has not solved, and I
> have not solved, and even George W. Bush has not solved ... yet ...

  Can you spell out what you think is the 'real problem of FAI' that hasn't
been solved yet, in a format that might make it easier for people to
create a solution?

  Cheers, Philip



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT