From: Daniel Radetsky (daniel@radray.us)
Date: Tue Aug 02 2005 - 16:34:48 MDT
On Tue, 2 Aug 2005 08:07:04 -0400
Randall Randall <randall@randallsquared.com> wrote:
> Conversely, in order to show that the likelihood of
> exploits is, at least, unknown, all Ben and Eliezer
> have to show is that either our model of physics is
> complex enough that the consequences of an action are
> not fully captured by a human understanding of it, or
> that there are areas in which our model of physics
> does not appear to be consistent, or explain observed
> behavior.
But I don't disagree with this, and I don't see why you think I do. I never
denied that the likelihood of exploits is unknown. I agreed with that, and
disputed only those who seemed to take this fact to show that reasonable
exploits are likely, which is an obviously false entailment.
> When you say that the helicopter/exploit depends on
> the existence of helicopters, you're right, but that's
> irrelevant! We're in the position of the dog, here,
> not the person who calls for the helicopter.
I strongly disagree. Whether we are the dog or the human, the escape is
possible iff there are helicopters.
> If members of the dog pack were to sit around discussing
> (pardon the conceit) whether a cornered human could get
> away via some undogly exploit, an argument that there
> was no known way for a human to get away in any dogly
> understanding of the situation would not be a correct
> argument that the human could not, in fact, get away, as
> we know.
You're right, but this is not my argument. The fact that there is no dogly way
the human could get away means the dogs may believe that the human has a
possible way to escape, but they cannot justifiably hold that there is a
reasonable way to escape.
> In any case, a statement that the incompleteness of
> physics is not an argument that exploits are possible...
NO, NO, NO! REASONABLE, not possible. That physics is incomplete is an argument
that exploits are possible, but possible isn't much help. The AI wants the
exploit to be reasonable under a particularly awful set of circumstances.
Clearly, an exploit could be possible and yet not reasonable under circumstances
C, or any circumstances at all.
> In summary, even if there were an apparently complete
> model of physics, and we could work out the consequences
> of any given action to an indefinite precision and time,
> we would still not have a guarantee of no exploits.
But we don't have to guarantee that there are no exploits for it to be
unjustified to suppose there are any.
> given that we know of areas that are incomplete, and
> given that we know that it's possible to surprise us even
> within ostensibly complete areas, it should be obvious
> that exploits must be assumed to exist for intelligences
> which dwarf our own (if that's possible).
If you know of an "exploit" like the ones under discussion, I'd love to hear
about it. I don't think you have any good reason to suppose reasonable exploits
exist.
Daniel
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT