From: Vladimir Nesov (robotact@gmail.com)
Date: Mon Dec 07 2009 - 09:54:10 MST
On Mon, Dec 7, 2009 at 3:10 PM, Panu Horsmalahti <nawitus@gmail.com> wrote:
> It seems that you haven't read any of the basic texts on this
> subject; as a result, you're confused about what people (especially people on
> sl4 and related "memecomplexes") want to do. Their goal is to create a
> "Friendly AI", which by definition is not dangerous. And they want to create
> it quickly, because the longer they wait, the greater the chance that
> something kills everyone (which is also called an existential risk).
>
As more constructive feedback to the OP: see the References sections of these wiki pages:
http://wiki.lesswrong.com/wiki/Singleton
http://wiki.lesswrong.com/wiki/Existential_risk
http://wiki.lesswrong.com/wiki/Friendly_artificial_intelligence
http://wiki.lesswrong.com/wiki/Paperclip_maximizer
http://wiki.lesswrong.com/wiki/Intelligence_explosion
-- Vladimir Nesov