From: Brian Phillips (deepbluehalo@earthlink.net)
Date: Sat Jun 30 2001 - 18:16:39 MDT
----- Original Message -----
From: Eliezer S. Yudkowsky <sentience@pobox.com>
To: <sl4@sysopmind.com>
Sent: Saturday, June 30, 2001 6:25 PM
Subject: Re: Putting super-intelligence in a body
> Jack Richardson wrote:
> >
> > I'm not sure we can assume that an enhanced computer necessarily will
> > be the mechanism through which a transhuman AI arises. On the other
> > hand, a friendly AI might well be needed to protect us from ourselves.
> > It may be that an advanced AI can be developed in the next ten years,
> > but soon after that, rapidly enhancing humans will begin to catch up.
>
> Jack, you're being far too conservative. You are, as Kurzweil will
> probably be saying five years from now, thinking exponentially instead of
> asymptotically. Anyway, I don't know whether you've read "Staring into
> the Singularity" (sysopmind.com) or "What is Seed AI?" (intelligence.org) but
> in either case, the moral of the story is that transhumanity very very
> rapidly enhances itself up to superintelligence. If a transhuman AI is
> developed in the next ten years, then ultraintelligent AI is developed in
> the next ten years.
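(An aside to make "thinking exponentially instead of asymptotically" concrete: a minimal sketch, with made-up constants, contrasting plain exponential growth with growth whose doubling time itself shrinks, so that it diverges in finite time. The function names, constants, and the 1e12 cutoff are all illustrative assumptions, not figures from the post.)

def simulate(rate, x0=1.0, k=1.0, dt=1e-4, t_max=2.0):
    # Integrate dx/dt = rate(x, k) with simple Euler steps.
    x, t = x0, 0.0
    while t < t_max:
        x += rate(x, k) * dt
        t += dt
        if x > 1e12:  # treat this as "effectively infinite"
            break
    return t, x

def exponential(x, k):
    return k * x        # dx/dt = k*x: large but finite at any finite t

def hyperbolic(x, k):
    return k * x * x    # dx/dt = k*x^2: diverges at t = 1/(k*x0)

print("exponential:", simulate(exponential))  # still only ~e^2 at t = 2.0
print("hyperbolic: ", simulate(hyperbolic))   # blows up just past t = 1.0

Run as written, the exponential curve reaches a modest e^2 by t_max, while the hyperbolic curve shoots past any bound shortly after t = 1: the difference between mere fast growth and a finite-time singularity.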
Big If. Always that IF crops up. :) The problem is software. That's
why we should learn as much as possible about the only working
"intelligence" we have access to. It's perfectly possible that we will
have to reverse-engineer sentience, after a fashion. The mad hardware
may make the process of understanding our own software doable...
and the Human Genome Project gets followed up (quickly) by the
Human Brain Project... which is then (using the hardware of the next
2-3 decades) followed even more quickly by the Human Mind Project.
At which point the models, the original templates, and the revised,
improved versions achieve Singularity. En masse. Until someone
can demonstrate code for even a near-infrahuman AI, reverse-engineering
has my confidence. Feel free to Future Shock me, Eli!
> Humans do not even remotely begin to catch up unless
> the ultraintelligent AI wants them to catch up, in which case Friendly AI
> has been achieved and a basically healthy Singularity has been initiated
> successfully. If you live next door to a Friendly superintelligence there
> is no reason to mess around with half-measures like genetic engineering or
> neurosurgery; you are who you choose to be, whether that's a biological
> human or an uploaded humanborn SI.
Would you like to be a biological human or an uploaded humanborn SI?
Answer: Yes. I would also like to be a non-humanborn SI. All at the same
time. :)
Which is why I say neurosurgery (including microsurgical techniques and
advances in neuroradiology, neuropathology, and basic neuroscience) is
a good place to be. Certainly a surer (if such a word is even appropriate
to this discussion) path to understanding "a" set of working code, which
can of course then be used to duplicate and improve.
Eli, what is the difference between an adept Expert System specializing in
investigating and correlating the nature of human sentience (and then
using that understanding to recode itself) and your conception of
a seed AI, beyond the Friendliness issue? Wouldn't they be very similar
in eventual outcome if both worked?
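(For what it's worth, here is a toy sketch of the recursive loop the two designs would seem to share. Every name in it is a hypothetical placeholder, not Brian's expert system or Eliezer's seed-AI architecture.)

def recursive_self_improvement(system, understand, recode, done):
    # Shared skeleton: study how the current system works, rewrite it
    # using that model, and keep the rewrite only if it measures better.
    # On this sketch the two designs differ mainly in what `understand`
    # examines (human brains vs. the AI's own source) and in whether the
    # goal system is engineered to stay Friendly across rewrites.
    while not done(system):
        model = understand(system)         # analyze the working code
        candidate = recode(system, model)  # produce a revised version
        if candidate.score() > system.score():
            system = candidate             # adopt the improvement
    return system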
regards,
Brian