From: pdugan (pdugan@vt.edu)
Date: Thu Jul 21 2005 - 15:41:10 MDT
>===== Original Message From Tennessee Leeuwenburg <hamptonite@gmail.com> =====
>I am increasingly of the opinion that humans are unqualified to
>recognise or properly analyse AGI. I think we need an intermediate
>step, such that we can improve our guesses.
I think this resolution is the finest pearl to come from these discussions,
which, like actual boxing, can last for many rounds with only brain damage as a
result. Tennessee asks how to engineer a succession of intelligences so that we
can make reasonable, risk-marginalized decisions regarding humanity's future.
While getting lucky on a merely transhuman mind Friendly enough to help blaze
the trail is certainly one option, and not a bad one provided we do get lucky
on that F-word being robust, I think there's a better one, one that can be
described in much more egalitarian terms. Ben's essay on a positive
transcension suggested the idea of a Singularity Steward, and expounded on it by
suggesting that a "Global Brain," or telekinetic internet, could induce
transhuman intelligence in humanity's collective efforts (though maybe not our
Collective Volition, ba dum ching). Certainly an increase in intelligence
based on free exchange of thought and experience would be safer than a
transhuman intellectual singleton. Is the desirability of this path a
principle the list can agree on?
Patrick