From: Gordon Worley (email@example.com)
Date: Mon Dec 04 2006 - 09:50:31 MST
I tried sending this message to the list about two weeks ago, but it
simply never made it across the Internet. I tried again and it still
failed. Maybe this time it will work, so long as no one tries to
bring Tatya into existence again.
On Nov 20, 2006, at 3:44 PM, Philip Goetz wrote:
> "It follows that we have no reason to expect any SI we deal with to
> attach a huge intrinsic utility to its own survival. Why? Because
> that's an extremely specific outcome within a very large class of
> outcomes where the SI doesn't shut itself down immediately. There is,
> in other words, no Bayesian evidence - no likelihood ratio - that says
> we are probably looking at an SI that attaches a huge intrinsic
> utility to its own survival; both hypotheses produce the same
> prediction for observed behavior."
> There doesn't seem to be any reason given for the conclusion. I could
> just as well say, "We have no reason to expect any animal we deal with
> to attach utility to its own survival, because that's an extremely
> specific outcome within a very large class of animals who don't commit
> suicide immediately."
But there is evidence that any animal we deal with attaches utility
to its own survival. Even if we knew nothing about genetics, Darwin,
or even basic biology, simple observation would give us evidence of
this. We cannot observe an SI at all, though, because none exists, so
we must look for other kinds of evidence, and currently we have none
(or so claims Eliezer; he's the expert, not me; I certainly have
none). The analogy therefore doesn't fit.
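To make the likelihood-ratio point concrete, here is a minimal sketch
with made-up numbers (none of these probabilities are from the original
post): an observation counts as Bayesian evidence for a hypothesis only
when the two hypotheses assign it different probabilities. When both
predict the observation equally well, the likelihood ratio is 1 and the
posterior stays at the prior.

```python
# Illustrative sketch of the likelihood-ratio argument.
# All probabilities below are hypothetical numbers for demonstration.

def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule: P(H | obs) from a prior and the two likelihoods."""
    num = prior * p_obs_given_h
    return num / (num + (1 - prior) * p_obs_given_not_h)

# Animals: behavior like fleeing predators is far more likely if the
# animal values its survival, so observing it is real evidence
# (likelihood ratio well above 1) and the posterior shifts.
animal = posterior(prior=0.5, p_obs_given_h=0.99, p_obs_given_not_h=0.10)

# SIs: "doesn't shut itself down immediately" is predicted equally well
# whether or not the SI attaches huge intrinsic utility to survival,
# so the likelihood ratio is 1 and the posterior equals the prior.
si = posterior(prior=0.5, p_obs_given_h=0.9, p_obs_given_not_h=0.9)

print(round(animal, 3))  # 0.908 - the observation moved our belief
print(si)                # 0.5 - no update at all
```

The asymmetry is the whole point: for animals the observation
discriminates between the hypotheses; for an SI that merely keeps
running, it does not.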
My guess is that you're most unsettled by the fact that Eliezer gave
no evidence that there is no evidence. But the fact that an expert in
the field claims there is no evidence is itself evidence of no
evidence. Bayesian evidence, anyway, which is the only kind that
really counts.
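The "evidence of no evidence" claim is itself an ordinary Bayesian
update: a domain expert is more likely to say "there is no evidence"
when there genuinely is none than when evidence exists and he has
somehow missed it. A minimal sketch, again with hypothetical
likelihoods chosen only for illustration:

```python
# Hypothetical numbers: how much should an expert's "no evidence"
# claim shift our belief that no evidence actually exists?

prior_no_evidence = 0.5    # our belief before hearing the expert
p_claim_if_none = 0.9      # expert says "no evidence" when there is none
p_claim_if_some = 0.3      # expert says it despite real evidence existing

numerator = prior_no_evidence * p_claim_if_none
denominator = numerator + (1 - prior_no_evidence) * p_claim_if_some
posterior_no_evidence = numerator / denominator

print(round(posterior_no_evidence, 2))  # 0.75: the claim is evidence
```

As long as the expert is more reliable than chance (the two
likelihoods differ), the claim moves the posterior, which is all
"Bayesian evidence" requires.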
On Nov 20, 2006, at 3:52 PM, Philip Goetz wrote:
> This is why I think the larger goals of Friendliness will be met not by
> trying to control AIs, but by trying to design them to have those
> attributes that we value in humans, and in trying to set up the
> initial conditions so that the ecosystem of AIs will encourage
> cooperation, or at least something better than winner-takes-all.
I'm not sure anyone serious advocates trying to control an arbitrary
optimization process (OP) anymore. Through Eliezer's AI-box
experiments, and the simple realization that humans cannot conceive
of everything, we know that boxing an arbitrary, let alone a
superintelligent, OP might be an interesting thought experiment in
AI engineering, but it's not something we can rely on to protect us.
I agree. The best strategy I know of anyone developing is to build an
OP that will (i) want to become superintelligent, and (ii) do so in a
way we would want it to if we knew more, thought faster, etc.
-- -- -- -- -- -- -- -- -- -- -- -- -- --
e-mail: firstname.lastname@example.org PGP: 0xBBD3B003
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT