From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Sep 14 2001 - 12:31:26 MDT
Brian Atkins wrote:
>
> Well, to be clear, he apparently was not misquoted on "computers taking
> over"; he was misquoted in his proposed solution to that perceived problem.
>
> Not that I see how it makes one whit of a difference as to whether the
> "computers" take over if they are attached to your mind or run separately.
> Perhaps what he meant to express was that he wants humans with augmented
> intelligence rather than AIs. He still comes across as someone who hasn't
> fully grasped the situation.
I think he unambiguously expressed that he wanted humans with augmented
intelligence rather than AIs, and that he considered "silicon neurons"
connected to biological neurons to be instances of augmented humans rather
than augmented computers. I suppose this is realistic as long as the
overall mind starts out with the cognitive architecture and emotional
makeup of the human core. Perhaps he would feel the same way about
uploaded humans.
I have no objection to uploaded humans or computer-augmented humans,
unless interim experience with Friendly AI shows that FAI is not only *as*
likely but actually substantially *more* likely to produce an altruist. I
see Friendly AI as our best chance, and the most important variable,
because I think AI will substantially precede uploading or even real
computer augmentation.
There are a lot of very powerful, unambiguous reasons to pursue Friendly
AI. The untrustworthiness of human uploads, however, is not one of them.
It looks to me like if you take the human emotional core and gradually
increase intelligence and self-awareness, the end result should be
altruistic in most cases (at least). It could be that interim results in
Friendly AI will unambiguously show that FAI works, in which case the
burden of proof shifts to the human-pathway advocates, but that hasn't
happened yet.
The essential flaw in the debate as conducted by both Hawking and Kurzweil
is the concept that pure nonbiological AIs are necessarily the enemy - or
at least the Other, a different species with different interests. The
underlying anthropomorphism is the expectation that a nonbiological
organism will have an observer-centered goal system (note:
"observer-centered" != "observer-dependent"). You can pursue a human
Singularity through a purely nonbiological substrate; in fact, that verges
on being the definition of Friendly AI.
Once you accept that it's not carbon versus silicon, and that you can get
the same Singularity via AI, you can take an unbiased look at the relative
technological rates and realize that AI comes first. The questions are
whether it's a Friendly AI developed by a Singularity-aware project,
whether humanity is wiped out by biological or nanotechnological weapons
during the current window of vulnerability, and how many fatalities
humanity suffers in the interim period.
I'd take a biological Singularity if I could get one, and would expect the
augmented humans to turn right around and develop a Friendly AI - at
least, that's what I'd expect if they were ethical. But if Kurzweil is
correct in expecting human enhancement in 2030, and AI first becomes
feasible in 2010, then human enhancement is as irrelevant as genetic
engineering. I don't expect to get a biological Singularity, and I think
a nonbiological Singularity is just as good, so I concentrate on
nonbiological Singularities. And I think that if you start fighting over
whether you want a biological or nonbiological Singularity, then humanity
wipes itself out while you're bickering, or an unFriendly AI is developed
first because all the Singularity-aware Friendly AI projects have been
shut down.
The important thing is to get to *some* positive Singularity as fast as
possible.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence