From: Daniel Radetsky (daniel@radray.us)
Date: Sun Mar 20 2005 - 15:17:46 MST
I'm not James, but he doesn't seem to be responding, so I will.
On Tue, 15 Mar 2005 17:29:37 -0800
"Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
> Do you believe this limitation on intelligence holds true for the
> infinite-computing power version of AIXI? That is, do you think it's a
> hard information-theoretical limit, rather than an inefficiency of
> bounded computing power?
What's "it"? You need to ask, "Consider this self-improving AI; does it have
enough information to discover X with infinite computing power?" I think that
*some* AI would be sufficiently ignorant that it would be an information-theory
limit
> Also, would you care to quantify the minimum environmental information
> required to produce a model capable of manipulating a human? To a guess
> if you cannot calculate.
What would it be quantified in? The number of facts? The size of the program?
Or are you asking James to supply the units as well?
Yours,
Daniel