From: pdugan (pdugan@vt.edu)
Date: Fri Dec 16 2005 - 08:49:33 MST
>Some of us think that one possible solution to the problem of
>unfriendly AIs is to aggressively augment and amplify the intelligence
>of humans--and more importantly, the intelligence of human social
>organizations composed of augmented humans--such that we have a broad,
>powerful, and evolving base of intelligence based on human values in
>place to deal with the threat of unfriendly AIs. Society is already
>proceeding down this broad path, but certainly not with any sense of
>urgency.
The important question, then, is: what are human values?
I'm personally unsure whether "human values" are anything other than figments
of our imaginations.
>On the other hand, some of us think that the risk of unfriendly AI is
>so great in its consequences, and possibly so near in time, that
>humanity's best chance is for a small independent group to be the
>first to develop recursively self-improving AI and to build in
>safeguards which, unfortunately, have not yet been conceived or
>demonstrated to be possible. I don't disagree with this thinking, but
>I assign it a very small probability of success because I think it is
>vastly outweighed in terms of military and industrial resources that
>can and will pick up the project when they think the time is right.
>
This is a pragmatic prediction, considering the amount of funding and
research DARPA and related U.S. government agencies have pursued in the
recent past. The majority of it isn't quite in line with an SL4 worldview,
but given acceptance of certain axioms, it's likely hardcore IA would become
a main prerogative.
>My (optimistic and hopeful) bet is on Google to be prominent in both
>scenarios.
>
>- Jef
A lot of people think that Google will be important. Incidentally, one of
the key events in an interactive storyworld I'm planning will involve an AGI
absorbing Google and inheriting a massive increase in frame of reference.
Patrick