From: James Higgins (jameshiggins@earthlink.net)
Date: Fri Jul 26 2002 - 09:45:55 MDT
Justin Corwin wrote:
> On a related note, I believe this experiment can be generalized to most
> humans, and should be seen as applicable even to highly intelligent and
> prepared individuals, as some of these people were, and I think this
> illustrates some universal principles.
I agree that this experiment could be generalized to most humans, but I
do not believe it is applicable to highly intelligent & prepared
individuals. You said "only 2 were transhumanists" and "1 could be
considered a nascent Singularitarian". Therefore, your sample may be
indicative of the general population, but not of transhumanists,
Singularitarians, or AI researchers (the most important group).
The vast majority of the population is not equipped for such a challenge.
To make an intelligent decision in this experiment one would, at the very
least, need to understand the concepts of nanotechnology & the
Singularity. If they don't truly understand, or believe in, the potential
threat, they may arbitrarily release the AI. Even if nanotech & the
Singularity were impossible, a transhuman AI could still wreak havoc on
the Internet.
You don't even qualify the level of computer science knowledge of most
participants (preferring instead to specify their religious beliefs in
detail, which, IMHO, is far less important).
You also don't specify the conviction of the participants. Were they
just playing along because they thought it was fun or had some spare
time? Did they take this seriously?
As for Eliezer's rules, I do agree that the 2-hour minimum is not
realistic.
James Higgins