From: Dani Eder (danielravennest@yahoo.com)
Date: Tue Jan 25 2005 - 14:28:57 MST
> 1. Usually the lab begins using strong protocols
> _after_ the organism
> has already shown it is dangerous - but with an AGI
> this may be too late
You can generalize this: humans almost always
react to a risk after the fact. The examples are
numerous:
- 50,000 years ago humans burned so much vegetation
that the interior of Australia tipped over to mostly
desert, killing off many species of plants and
animals. A science news story appeared today
about this, and it remains to be seen whether we
will repeat the process in South America. This may
qualify as the slowest reaction to a man-made risk.
- Experiments with nuclear fission on university
campuses before there was any real understanding of
radiation hazards (Columbia & U. of Chicago during
WWII). Experiments with nuclear fission in a
live reactor even after the radiation hazards
were well known (Chernobyl).
- Funding for a Pacific tsunami detection network
after devastating tsunamis there in the 20th century.
Funding for Atlantic, Caribbean, and Indian Ocean
detection networks only after the devastating tsunami
of December 2004.
- Funding for the Spaceguard survey of
Earth-approaching asteroids after Comet
Shoemaker-Levy 9 demonstrated what happens when a
large body hits a planet. This is despite decades of
evidence that the dinosaurs were killed off by such
an object, and the copious evidence on the Moon of
past impacts.
There seems to be a human inertia, or inability, to
understand a hazard in the abstract (or even after
seeing it in a Hollywood movie). I'll leave
dissecting the psychology to people who have a better
understanding of that field than I do.
If you believe that the risks of an AI runaway are
serious, though, then how do you motivate other
people to take them seriously too, given man's poor
record to date?
> Clearly, AGI-specific safety protocols need to be
> developed and refined
> as we gain more knowledge, and they will almost
> certainly always have to
> be stricter in some ways than bio or other areas of
> science if we are
> going to pay more than lip service to the issue.
I would rate the greatest risk right now as coming
from a distributed, network-based AI (run like the
SETI and protein folding projects at best, and like
spyware/zombie DDoS networks at worst). An AI
project at a company or university lab will be
somewhat confined to a specific set of hardware.
A distributed network AI could already have built-in
features for further distribution, and a ready-made
culture medium in the Internet. How do you
even begin to secure an environment like that?
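
To make that concrete, here is a toy sketch in
Python of the work-unit loop such a client runs.
The coordinator URL, endpoints, and field names
below are invented for illustration; this is not
any real project's protocol.

    # Toy sketch of the work-unit loop that volunteer
    # computing clients run.  URL and field names are
    # made up for illustration.
    import json
    import time
    import urllib.request

    COORDINATOR = "http://coordinator.example.org"  # hypothetical

    def fetch_work():
        # Ask the central server for a unit of work.
        with urllib.request.urlopen(COORDINATOR + "/get_work") as resp:
            return json.load(resp)

    def compute(unit):
        # Stand-in for the real computation (signal
        # analysis, protein folding, or -- in the
        # worrying case -- cognition).
        return {"id": unit["id"], "result": sum(unit["data"])}

    def report(result):
        # Return the finished result to the coordinator.
        req = urllib.request.Request(
            COORDINATOR + "/submit",
            data=json.dumps(result).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    while True:
        report(compute(fetch_work()))
        time.sleep(1)  # idle politely on the donated hardware

Everything outside compute() is generic plumbing that
already ships with every networked machine, which is
why the Internet makes such a good culture medium.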
Daniel