From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Wed Nov 16 2005 - 15:54:46 MST
Compiling reliable estimates of "group behaviour" risks is necessary for deciding whether or not to activate an AGI of type #3 below. Activating the AGI in question must make a favourable world more likely than leaving it unactivated would.
Bioterror existential risks are overstated because carrying them out requires a "doomsday rapture" mindset, not merely contemplation of suicide. The infrastructure needed to manufacture germs will become easier to obtain in the years ahead, so the safety threshold required for unleashing an AGI will drop ever so slightly over time, until other pandemic countermeasures are devised. The inertia in government weapons research programs suggests MNT arms races are a very real possibility. Nano-terror won't happen: terrorists won't be capable of manufacturing the first nano-assembler, and if assemblers ever become as available as black-market nukes currently are, we will already have been killed or enslaved by more skilled assembler handlers.
>"IS complex emergence necessary for AI"
From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Tue Sep 20 2005 - 04:08:11 MDT
<SNIP>
2. Thus the SIAI has the design requirement: the goal system trajectory must
reliably stay within certain bounds, which is to say that the optimisation
targets of the overall optimisation process must not drift out of a
certain region. This is a very specific and limited kind of predictability;
we don't need to predict specific AI behaviour or cognitive content. I agree
that the
task would be impossible if one were trying to predict much more than just
the optimisation targets. I am happy to have all kinds of emergence and
Complexity occurring as long as they stay within the overall constraints,
though theory and limited experimental experience suggest to me that there
will be a lot less of this than most people would expect.
3. If that turns out to be impossible, then we'd agree that AGI development
should just go ahead using the best probabilistic methods available (maybe;
it might make sense to develop IA first in that case). But we shouldn't
write something this important off as impossible without trying really
hard first, and I think that many people are far too quick to dismiss this
so that they can get on with the 'fun stuff', i.e. actual AGI design.
Michael Vassar <michaelvassar@hotmail.com> wrote:
<SNIP> For all of our
arrogance, most Transhumanists grossly overestimate the abilities of
ordinary humans. This is substantially a consequence of how folk psychology
works, and fails to work for outliers, but also a consequence of typically
limited and isolated life experience. Unfortunately, it has serious
consequences when predicting the future. Our estimates of the likely
behavior of large scale groups, the effort that will be devoted to a
particular research objective, or the time until some task is accomplished
are all grossly distorted. For many transhumanists this means that boogie
men such as "terrorists" are imagined as something that never was, namely
disutility maximizers, and the resultant threats of bioterror and nanoterror
are overestimated by many orders of magnitude. For almost all
transhumanists this means an underestimation of inertia, leading to Chris
Phoenix's fears of pre-emptive arms races and Nick Bostrom's utopian dreams
of world government and benign regulation of dangerous tech. At SL4, it
probably means a serious overestimate of the immediacy of the existential
risk associated with more powerful hardware.