Re: Uploading with current technology

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Dec 08 2002 - 22:11:00 MST


Gordon Worley wrote:
>
> On Sunday, December 8, 2002, at 01:08 PM, Ben Goertzel wrote:
>
>> http://users.rcn.com/standley/AI/immortality.htm
>>
>> Thoughts?
>>
>> Can anyone with more neuro expertise tell me: Is this guy correct as
>> regards what is currently technologically plausible?
>
> The Singularity and, specifically, FAI is a faster, safer way of
> transcending. Super *human* intelligence is highly dangerous. Think
male chimp with nuclear feces. Unless you've got some way to protect
the universe from the super *humans*, we're probably better off with
our current brains.

FYI: This is not currently the view of the Singularity Institute. (As
far as I know. I'm just one Director, after all.) I can see ways for
human enhancement to go wrong, but I can also see ways for it to go right;
in that highly abstract respect it's not much different from FAI. *What*
might go wrong is to some extent different.

Given the enormously greater effort I've put into thinking up safe FAI
strategies, versus safe human enhancement strategies, even if FAI "looked
safer" it might not count for much; maybe putting an equal amount of
computing power into coming up with human enhancement strategies would
yield equally good or even better strategies.

I for one would like to see research organizations pursuing human
intelligence enhancement, and would be happy to offer all the ideas I
thought up for human enhancement when I was searching through general
Singularity strategies before specializing in AI, if anyone were willing
to cough up, oh, at least a hundred million dollars per year to get
started, and if there were some way to resolve all the legal problems with
the FDA.

Hence the Singularity Institute "for Artificial Intelligence". Humanity
is simply not paying enough attention to support human enhancement
projects at this time, and Moore's Law goes on ticking.

It would require less computing power to make sense of reality in general,
and complex Singularity strategies in particular, if all the positive
support ended up on one side, and all the negative support ended up on the
other side. But that kind of totally clear-cut choice usually only
develops in issues of fact, in the natural sciences, a decade after the
initial controversy is over. It doesn't often characterize complex
planning in real-world scenarios. Sometimes, yes, but not often.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
