From: Randall Randall (randall@randallsquared.com)
Date: Tue Aug 02 2005 - 06:07:04 MDT
On Aug 2, 2005, at 5:29 AM, Daniel Radetsky wrote:
> On Tue, 2 Aug 2005 02:44:43 -0400
> "Ben Goertzel" <ben@goertzel.org> wrote:
>> So you're right. The argument from the known (empirical and
>> conceptual)
>> incompleteness of physics is only PART of my reason for believing a
>> superhuman AI could find a box-exploit.
>
> But my point is that the incompleteness of physics provides next to no
> support
> for the existence of exploits.
The point is, though, that "exploits" is shorthand here
for "tricks that it is extremely improbable that a being
of human intelligence would ever think of".
Given this, in order to show that it is unlikely that
such exploits exist, you have to show that there are no
areas in which our model of physics is very incomplete,
*and* that our model of physics is simple enough that
the consequences of it are fully understandable by humans.
Conversely, in order to show that the likelihood of
exploits is, at least, unknown, all Ben and Eliezer
have to show is that either our model of physics is
complex enough that the consequences of an action are
not fully captured by a human understanding of it, or
that there are areas in which our model of physics
does not appear to be consistent, or does not explain
observed behavior.
When you say that the helicopter/exploit depends on
the existence of helicopters, you're right, but that's
irrelevant! We're in the position of the dog, here,
not the person who calls for the helicopter. There is
no way for a dog to know whether a helicopter he has
never seen before might show up, and no way for him to
reason about the possibility. He can't show that
helicopters are possible without an example, but that
places no constraint on the likelihood of rescue for the
"trapped" human.
If members of the dog pack were to sit around discussing
(pardon the conceit) whether a cornered human could get
away via some undogly exploit, an argument that there
was no known way for a human to get away in any dogly
understanding of the situation would not be a correct
argument that the human could not, in fact, get away, as
we know.
In any case, a claim that the incompleteness of
physics is no argument that exploits are possible
implies that the claimant has delineated the boundaries
of the incompleteness, and has some procedure to rule
out exploits within those boundaries. But if that were
the case, the areas of incompleteness would be within
the model, rather than outside it.
In summary, even if there were an apparently complete
model of physics, and we could work out the consequences
of any given action to arbitrary precision, arbitrarily far ahead,
we would still not have a guarantee of no exploits. But
given that we know of areas that are incomplete, and
given that we know surprises are possible even
within ostensibly complete areas, it should be obvious
that exploits must be assumed to exist for intelligences
which dwarf our own (if that's possible).
--
Randall Randall <randall@randallsquared.com>
"Lisp will give you a kazillion ways to solve a problem.
But (1- kazillion) are wrong." - Kenny Tilton