From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Wed Oct 27 2004 - 00:20:51 MDT
--- Slawomir Paliwoda <velvethum@hotmail.com> wrote:
> Not when you start thinking about the consequences of the cause you are
> supporting. SIAI failing to build safe and humane SI is not the worst thing
> that can happen. The worst thing that can happen is, actually, SIAI
> succeeding at making SI that would later turn Unfriendly. It makes sense to
> support the cause when it is shown how the project won't lead to UFAI in a
> way potential donors can understand. In absence of comprehension, the only
> thing left is trust.
>
That's another point possibly deterring potential donors. If Sing Inst is capable of actually 'saving the world', the flip side is that they are also capable of actually destroying it if their approach fails.
Someone donating to Sing Inst may actually be *accelerating the destruction of the world* if Sing Inst creates a UFAI.
You can understand the dilemma of someone who agrees with basic Singularitarian concepts, but has serious doubts about the specific approach of Sing Inst...
For instance, take my stubborn claims:
* No practical (real-time) general intelligence without sentience is possible
* No completely selfless AI is possible
* Collective Volition is impossible for a Singleton AI to calculate and can't be imposed from the top down
What should I do?
(a) Trust that Eli is right and I'm simply mistaken?
(b) Stick to my guns and doubt Eli?
(c) Snap and start scribbling incomprehensible
diagrams and spouting gibberish?
=====
"Live Free or Die, Death is not the Worst of Evils."
- Gen. John Stark
"The Universe...or nothing!"
-H.G.Wells
Please visit my web-sites.
Sci-Fi and Fantasy : http://www.prometheuscrack.com
Mathematics, Mind and Matter : http://www.riemannai.org