Re: Ben vs. Ben

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jun 30 2002 - 13:01:36 MDT


Ben Goertzel wrote:
> E.g. I know Peter Voss thinks it's just way too early to be seriously
> talking about such things, and that he's said as much to Eliezer as well...

You and Peter have different AIs. Peter and I have our differences on
cognitive science and Friendly AI, but I think we have pretty much the
same take on the dynamics of the Singularity and the moral
responsibilities of a seed AI programmer. If Peter says his AI doesn't
need a controlled ascent feature yet, it's because his current code does
some experiential learning but isn't strongly recursive: it doesn't
have Turing-complete structures modifying each other.

Also, Peter is in general much more of a perfectionist than you are. He
might not always agree with me on what constitutes a problem, but if he
sees a problem I would expect him to just fix it. You strike me as
someone who needs a strong reason to fix a problem, and Peter strikes
me as someone who needs a strong reason *not* to fix a problem; so if
*even Peter* thinks his AI doesn't need a controlled ascent feature, it
probably doesn't.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence