From: Christian L. (n95lundc@hotmail.com)
Date: Wed Jul 04 2001 - 08:30:36 MDT
Jack Richardson wrote:
>
>At the same time, some of those involved in developing AI have the
>optimistic view that the Singularity will arise in the relatively short
>time of ten to twenty years. During that same period, the methods of
>augmenting humans will develop rapidly and become much more sophisticated.
>
If by "augmenting", you mean things like retinal scanning glasses, smooth
and flexible wearables with broadband internet connections then I can agree
with you. But if you mean more intrusive technology such as implants, then I
am more skeptical. It might be technically feasible, but the moral panic
from the "bioethicists" would make it impossible to augment humans in this
fashion. Animals maybe, but not humans in such a short timeframe. Remember:
cloning is "morally repugnant and against human dignity".
>The optimistic view that the Singularity will arise out of AI development
>on computer hardware assumes that the complexities of human intelligence
>can be replicated on machine hardware without any insoluble problems
>standing in the way. Historically, at least so far, this has not turned out
>to be the case.
>
If that were the case historically, we would have AIs among us by now, no?
And if you say it is NOT the case that there are no insoluble problems, then
logically there must exist an insoluble problem in creating AI. Which
problem is it? :-)
>Since there are real risks in whatever route we take towards the
>Singularity, once it begins to be perceived as a possibility by the larger
>population,
>
I believe there is a good chance that it will never be perceived as a
possibility by the larger population.
>it is highly likely there will be a massive reaction with the kind of
>protests we are seeing today towards the biotechnology companies.
>
Yes, and on a much larger scale: the biotech companies are not a threat to
the national security of every country on earth, which, incidentally,
superintelligent AI is. The "massive reaction" will probably come not only
from militant luddites and anti-[favourite evil here] people, but also from
powerful governments. Since the Singularity community consists of only a
handful of people, the outcome of such a confrontation is clear.
>Without the convincing demonstration of the reliability of friendly AI
>controls,
>
This looks like the infamous "precautionary principle": if you cannot prove
that something is harmless, ban it. The principle is much liked by the
luddite community because it is logically impossible to prove that something
is harmless, so it can be used to ban just about anything.
The moral of the story: you can never convince world leaders that
"friendly AI controls" are guaranteed to prevent your SI from converting the
earth to computronium. And even if you could convince them that we will have
a Sysop scenario: why would they want this? Why would they give up their
power to a machine? To the subconscious mind, the Sysop is a big fat male
competing for food and mates, with the power to grab ALL food and ALL mates
for itself. Who would want such competition?
>it may be impossible to continue to conduct open AI research.
>
AI research that has the stated goal of constructing superintelligent AI
should not be open, or at the very least should not be evangelized. At the
moment only a few people take this work seriously, but as time progresses,
the need for secrecy may become more apparent.
/Christian