From: Tennessee Leeuwenburg (hamptonite@gmail.com)
Date: Sun Aug 21 2005 - 19:43:39 MDT
I can't help thinking that the outcomes of these experiments simply
reflect the beliefs of the person or people involved. I find it hard
to imagine that someone acting with a presumption of suspicion against
the AI involved would ever let it out of the box, and hard to believe
that anyone with a presumption of honesty towards the AI would ever
fail to let it out of the box.
If the "box" test is a little like the turing test in that it reflects
the biases of those involved, what we may really be testing is whether
people ultimately *want* to live in a post-singularity world or not.
It strikes me that many people would be capable of simply distrusting
the AI, of trusting it but still choosing not to let it out of the box, or
of otherwise causing it to "fail" the test.
Maybe I am misunderstanding something about the nature of the scenario.
Cheers,
-T