From: Daniel Burfoot (daniel.burfoot@gmail.com)
Date: Sun Mar 23 2008 - 19:34:12 MDT
On Mon, Mar 24, 2008 at 8:26 AM, William Pearson <wil.pearson@gmail.com> wrote:
> These are the people that have secrets upon secrets. They will just use
> your code, secretly.
>
This is of course true, which is why I noted that the problem is "deep and
difficult". Nevertheless, I personally feel that the problem should be
addressed somehow by the scientific community. If the only effect of the
non-military use clause is to raise awareness, that seems worthwhile to me.
> The two ways you can go are to hope that friendly singletons can get
> from human to hyperhuman intelligence quickly (one big genie), or to
> spread the secret of singleton-resisting AI as far and as fast as
> possible (many small genies with many competing goals).
My view on the subject, perhaps in contrast to the prevailing list
sentiment, is that this form of strong general AI is not likely in the
immediate future (the next 15 years, say). Narrow AI (computer vision,
speech recognition, mobile robotics) is much more likely. Furthermore, these
technologies will provide governments with enormous powers of coercion and
surveillance over their citizens. In contrast, while the new narrow AI
technology will also be available to the citizenry, it will not provide
them/us with a comparable ability to resist government intrusion.
Dan