From: Stuart, Ian (Ian.Stuart@woolpert.com)
Date: Wed May 18 2005 - 12:51:00 MDT
>While messing about with evolved NNs is by no means safe, it
>is one of the AGI approaches least likely to produce a hard takeoff, and
>in any case I doubt this lot are going to do anything radically novel or
>effective with them. As such I wouldn't place this anywhere near the top
>of the list of things to be concerned about.
Thank you for the prompt response. I agree completely that this particular project probably does not have what it will take to achieve general cognition; however, I believe that any project in which cognition is a possible outcome should at the very least be monitored (probably all that is necessary in this case) and approached should it appear that progress is being made. Unfortunately, I don't know how to tell, from outside a project, when or whether progress has been made. As Mr. Yudkowsky frequently points out, the time to go from the "We have successfully simulated a human brain in a computing substrate" announcement to "Oh My God, Singularity!!" is possibly vanishingly small. I have done some searching of the SL4 archives and did not immediately find a policy suggestion for dealing with non-SIAI AGI projects. The SIAI website mentions loaning out programmers to consult and signing non-disclosure agreements, but if someone starts down the path to AGI without taking Friendliness into account, then regardless of how weak their attempt turns out to be, is there an intervention process in place?
>Are you involved with the UK transhumanist/Extropian crowd?
No. I am from Cincinnati, OH, in the States; I just happened to run across the story while perusing tech news.
This archive was generated by hypermail 2.1.5 : Sun May 19 2013 - 04:01:10 MDT