RE: AI boxing

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Jul 20 2005 - 23:30:19 MDT


> > How can you assess the probability that a superhuman AI will develop a
> > novel theory of unified physics (that no human would ever be smart
> > enough to hit upon) and figure out how to teleport out of its box?
>
> I can't, but I submit that no one on this list has any basis to assess
> the probability either. So if I claim that the probability is
> infinitesimal, then your only basis for disagreement is pure paranoia,
> which I feel comfortable dismissing.

You are wrong -- my basis for disagreement is not pure paranoia.

I'm not a very paranoid person, in fact.

My basis for disagreement is my study of the history of science, and my
intuition that modern physics leaves a great many important things
undiscovered. In fact, I don't doubt that teleportation *is* possible once
one understands quantum gravity well enough.

It is true that my estimate (that the probability is much greater than
infinitesimal) is based on nonrigorous intuitions rather than rigorous
arguments, but it is NOT true that it is based on "pure paranoia" ...

-- Ben
