RE: large search spaces don't mean magic

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Aug 01 2005 - 23:19:26 MDT


Hi Daniel, etc.

> But I'm not talking about the magic of sweet-talking the jailer into
> letting the AI out. That's another question. I was talking about the
> belief aired by a number of members of this list that there is reason
> to believe that features of the laws of physics might be exploited by
> a boxed AI. For example, Goertzel seemed to suggest that he believed
> an AI might be able to use quantum teleportation to escape the box.
> Perhaps I am misrepresenting him, but I think this is a fair example
> of the kind of belief that many people on this list hold.

I'm currently vacationing in a fairly undeveloped area and only have email
access every few days so I can't take part in a dialogue right now.

However, since I've been mentioned I'll chip in a little bit here...

My statement was that I believe an AI with superhuman intelligence may well
be able to find SOME method for escaping from a box we think is impregnable.
I may have mentioned "quantum teleportation" as one particular possibility,
but if this possibility is proved impossible, that doesn't affect my point
at all.

On this particular topic, I seem to be in pretty close agreement with
Eliezer.

I think there is a reasonably good analogy with a dog that has cornered its
human victim and thinks there's no way the human can escape -- until the
human is rescued by a helicopter, something the dog has never seen or
imagined, and which lies outside the dog's world-model.

We may prove a box is impregnable based on our "physical laws", but these
so-called laws are not absolute truth; they're just hypotheses we've made
to fit the data we've observed....

> I agree that we should accept
>
> 1. Our theory is not the final ultimate theory of everything.
>
> 2. It is possible that there exists a box-exploit in physics.
>
> What I disagree with is that
>
> 3. It is likely that there exists a box-exploit, and furthermore a
> box-exploit which is reasonable under circumstance C.
>
> The fact that there could be a mind before which we are dog-like in
> our intellectual capacity doesn't mean that (3) is true. It just means
> that if (3) is true, then it's more likely that the mind will find the
> exploit. Also, the fact that we have been wrong a lot in physics in
> the past does not support (3), only (1) and (2).

I agree that assessing the probability that there exists a box-exploit is
something that we don't currently know how to do in a rigorous way.

So, the assessment of this probability, at present, comes down to
qualitative and intuitive considerations.

This doesn't mean that plausible, rational arguments can't be made (even if
they fall short of full scientific rigor), of course.

But the only arguments I know how to make to bolster my intuition that
box-exploits are reasonably likely are complex ones, depending on detailed
claims about specific places where I think modern physics may be
incomplete. This is a topic better reviewed in a long technical paper than
in a casual email...

So for now, I agree that no rigorous arguments have been made in favor of
(3), only some fairly vague intuitive ones.

-- Ben
