From: Jeff Medina (analyticphilosophy@gmail.com)
Date: Fri Aug 26 2005 - 16:28:13 MDT
On 8/26/05, Tim Duyzer <tim.duyzer@sympatico.ca> wrote:
> What I wonder about the whole box/gatekeeper thing is why the gatekeeper has
> to have the power to let the AI out. If the one person who got to talk to
> the AI didn't have that power, and had to convince a human counterpart of
> why the AI had to be let out, it'd at least be a firewall of sorts. I
> haven't seen an answer to this anywhere on the list.
Letting the AI out directly and letting it out through a human
intermediary are functionally indistinguishable scenarios. If a UFAI
can convince any given human to let it out with nothing but words, it
can convince any given human to convince any other given human to let
it out using nothing but words; the intermediary is just one more
link in the same chain of persuasion.
--
Jeff Medina
http://www.painfullyclear.com/

Community Director
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

Relationships & Community Fellow
Institute for Ethics & Emerging Technologies
http://www.ieet.org/

School of Philosophy, Birkbeck, University of London
http://www.bbk.ac.uk/phil/