From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jun 30 2001 - 16:25:03 MDT
Jack Richardson wrote:
>
> I'm not sure we can assume that an enhanced computer necessarily will be the
> mechanism through which a transhuman AI arises. On the other hand, a
> friendly AI might well be needed to protect us from ourselves. It may be
> that an advanced AI can be developed in the next ten years, but soon after
> that, rapidly enhancing humans will begin to catch up.
Jack, you're being far too conservative. You are, as Kurzweil will
probably be saying five years from now, thinking exponentially instead of
asymptotically. Anyway, I don't know whether you've read "Staring into
the Singularity" (sysopmind.com) or "What is Seed AI?" (intelligence.org), but
in either case the moral of the story is that transhumanity very, very
rapidly enhances itself up to superintelligence. If a transhuman AI is
developed in the next ten years, then ultraintelligent AI is developed in
the next ten years. Humans do not even remotely begin to catch up unless
the ultraintelligent AI wants them to catch up, in which case Friendly AI
has been achieved and a basically healthy Singularity has been initiated
successfully. If you live next door to a Friendly superintelligence, there
is no reason to mess around with half-measures like genetic engineering or
neurosurgery; you are who you choose to be, whether that's a biological
human or an uploaded humanborn SI.
Incidentally, another web page you might want to take a look at is "Future
Shock Levels", also at sysopmind.com.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence