From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Tue Feb 28 2006 - 21:55:45 MST
Mitchell Porter wrote:
> As it is apparently unsafe to discuss the theory of self-enhancement
> in public, I'd like to suggest the following topic: how to make AIXI
> (or AIXItl, or Schmidhuber's Godel machine) Friendly. AIXItl, if
> Hutter is correct, is an optimal general problem solver, but it is
> massively slow. There would appear to be no prospect of anyone
> attaching an UnFriendly supergoal to an AIXI engine and creating a
> threat. But solving the problem of "FAIXI" could be good practice.
Sounds like http://sl4.org/wiki/SimplifiedFAI
Note: you *must* simplify down the FAI problem if you want a
modification to AIXI which solves an interesting problem in FAI without
being a constructive theory of AI. It presently seems that a real
solution to FAI, in all its dimensions, would need to exploit enough
regularity in the problem to qualify also as a constructive theory of
AI. But I have no objection to your trying to solve the problem in
SimplifiedFAI, which, obviously, a full solution would also need to
solve. I note that you would probably do better to try to specify
aspects of the problem than to think up a solution; the Wiki page is
unfinished.
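(Porter's aside that AIXItl is "massively slow" can be made concrete with a toy sketch. This is not Hutter's actual construction, just an illustration of the program-enumeration step that the optimality proof hides inside its multiplicative constant; the function name and the binary program space are my own stand-ins.)

```python
from itertools import product

def enumerate_programs(max_len):
    """Yield every binary 'program' up to max_len, shortest first.
    A toy stand-in for the space of proof/program pairs AIXItl sweeps."""
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# The search space doubles with each extra bit of program length:
# 2 + 4 + ... + 2^l = 2^(l+1) - 2 candidates up to length l.
counts = {l: sum(1 for _ in enumerate_programs(l)) for l in (4, 8, 16)}
```

Even before any candidate is run against the time bound t, the enumeration alone is exponential in l, which is why nobody is going to bolt an UnFriendly supergoal onto a literal AIXItl and create a threat.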
>>> The aim should always be, to turn all of these into well-posed
>>> problems of theoretical computer science, just as well-posed as,
>>> say, "Is P equal to NP?"
>> On ordinary computers or quantum ones? For our current model of
>> physics or actual physics?
> I agree with the computer scientist Scott Aaronson that this is
> basically a question of mathematics, not of physics. By definition,
> it's about whether P and NP are the same *for a Turing machine*. If
> you're in a universe with super-Turing computational primitives, what
> you can do in polynomial *physical* time may be different, but for
> conceptual clarity, I'd prefer to keep the theory of computational
> complexity distinct from the contingencies of physics.
Fair enough if you're doing mathematics. But if it *mattered* that P
not equal NP, if a real-world FAI result depended on that outcome, then
physics might indeed prove relevant. It is one of the things you would
have to throw at a proposed solution in an attempt to falsify it. In
math you get to specify your assumptions. FAI has to work *in the real
world*.
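(For concreteness, the mathematical question Porter is pointing at can be sketched with SUBSET-SUM, a standard NP-complete problem: checking a proposed answer is cheap, but the only known general way to *find* one is exhaustive search. The helper names below are illustrative, not from any referenced source.)

```python
from itertools import combinations

def verify(nums, target, certificate):
    """Checking a proposed certificate is polynomial in its size.
    Efficient verification is what places SUBSET-SUM in NP."""
    return sum(nums[i] for i in certificate) == target

def search(nums, target):
    """No polynomial-time algorithm is known: this tries all 2^n
    subsets. P = NP would mean search is never much harder than
    verification; that equivalence is the open question."""
    for r in range(1, len(nums) + 1):
        for idxs in combinations(range(len(nums)), r):
            if verify(nums, target, idxs):
                return idxs
    return None

nums, target = [3, 9, 8, 4, 5, 7], 15
cert = search(nums, target)
```

The definitions here are stated for Turing machines, which is Porter's point; whether a physical device could do better than the brute-force search is the separate, physics-contingent question.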
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT