RE: An essay I just wrote on the Singularity.

From: Yan King Yin (y.k.y@lycos.com)
Date: Wed Jan 07 2004 - 13:56:06 MST


>> I'm afraid the singularity won't happen because it
>> is impossible to formulate Friendliness *and* to
>> compute it with reasonable resources *and* to
>> enforce
>> it physically.
>> Recursive self-improvement, however, is possible.
>> Just my 2 cents for now... Cheers, YKY
>
>Ummm.... So no superintelligent being can ever be
>Friendly? That kind of assertion requires some sort of
>Bayesian evidence to be viable.

It depends on how the singularity is defined: if it means the
emergence of AIs, then it is not only possible but highly probable.
"Friendliness" is also necessary, but it will not be a god-like
Friendliness; it will be more like a utilitarian, practical
friendliness.

I'm just suggesting that people should stop entertaining some of the
more excessive ideas, such as the sysop scenario or similar schemes in
which a single AI soaks up all available resources. This will not lead
to paradise, because human beings will remain a diverse population
competing with one another for resources; a single AI would merely
transfer that struggle from human society to the AI itself. Everyone
would instantly turn into kids vying for Santa's favor. Surely fun to
watch, but it doesn't solve any problems. Alternatively, the SAI could
strip everyone of their privileges and make us all equal, but that
outcome is highly unlikely to emerge from current conditions.

We can discuss this in the #SL4 chat in a few hours =)
YKY
