Re: ESSAY: Program length, Omega and Friendliness

From: Eliezer S. Yudkowsky
Date: Wed Feb 22 2006 - 14:55:03 MST

Ben Goertzel wrote:
> I conjecture that achieving powerful general intelligence within
> plausible computational resources involves integrating a variety of
> components involving differing levels of specialization. (This is
> different from AIXItl or godel machine type architectures, which are
> very simple but do not operate well within plausible computational
> resources.) If this is true then making a vastly more intelligent AI
> may involve integrating a large number of different components, at
> various levels of specialization. In this case the "knowledge about
> the external world" is present in the AI system not only explicitly as
> data but implicitly in the detailed design of the specialized
> components. The more specialized components, the greater the
> algorithmic information.
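
[The "algorithmic information" invoked above can be made concrete. Kolmogorov complexity itself is uncomputable, but any real compressor gives a computable upper bound on it; a minimal sketch, not part of the original exchange, using Python's standard zlib:]

```python
# Illustration only: compressed length as a crude, computable upper
# bound on a string's (uncomputable) algorithmic information content.
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length in bytes of the zlib-compressed data: an upper bound
    on the algorithmic information of `data`, up to a constant."""
    return len(zlib.compress(data, 9))

# A highly regular string has low algorithmic information and
# compresses to a tiny fraction of its length...
regular = b"ab" * 10_000

# ...while random bytes are, with overwhelming probability,
# incompressible: their shortest description is roughly themselves.
random_bytes = os.urandom(20_000)

print(compressed_size(regular))       # tiny compared to 20,000
print(compressed_size(random_bytes))  # close to 20,000
```

[On this view, "more specialized components" means more bits that any description of the system must contain, whether those bits live in explicit data or in component design.]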

I can't properly respond to this without going into greater length than
I can afford right now. On the approach I am presently taking,
information about the external world would mix into the AI's cognitive
dynamics, but that extra complexity would still orthogonalize out of the
Friendly invariant that got verified. Only the nature of the mix, the
structure of how the mix happened, would be part of what was verified as
the Friendly invariant. The AI would prove about itself that its future
self would keep following a certain specification of mixing.

>> A cellular automaton could give rise to a whole universe full of
>> sentient creatures and superintelligent AIs, while still having almost
>> trivial algorithmic complexity.
> Yes, but it cannot do so within a brief period of time.

You don't actually know that. You have not checked all simple CAs. I
agree with your intuition, mind you, but it is not a known fact.
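
[To make the "almost trivial algorithmic complexity" point concrete, here is a minimal sketch, not part of the original exchange, of Rule 110, an elementary cellular automaton whose update table fits in a single byte and which Cook proved Turing-complete:]

```python
# Illustration only: a CA whose entire rule is one byte (the number
# 110), yet which can in principle support universal computation.
RULE = 110  # the rule number IS the 8-entry lookup table

def step(cells):
    """One synchronous Rule 110 update over a wrap-around row of 0/1 cells."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=20):
    """Evolve from a single live cell and return all rows, including the start."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

for row in run(width=32, steps=10):
    print("".join("#" if c else "." for c in row))
```

[The open question in the exchange is not whether such a rule can generate intelligence, but how long it takes; the rule's simplicity says nothing about its time cost.]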

> I conjecture that one would need to build a system with a pretty high
> algorithmic information...
> Yes, but what you are alluding to is an intelligence process that is
> like AIXItl or evolutionary learning in that it is a simple algorithm
> carrying out a sort of semi-exhaustive, heuristically-guided program
> space search.

I am alluding to a core that is a properly structured full intelligence.
I don't throw brute force at problems I don't understand. Neither do
I say, "I bet this requires a lot of algorithmic complexity and highly
specialized components." When I run into something I don't understand,
I keep gnawing away until it ceases to be a mystery unto me. Then,
generally speaking, the resolved mystery turns out not to require
massive amounts of hardware, massive amounts of specialized code,
massive amounts of knowledge, or any of the other things that people
imagine being necessary when they run into a problem that *feels*
extremely difficult because they have no idea, or only very vague ideas,
of how it works.

> I think that AGI's need to have this aspect, but they also need a
> whole bunch of more specialized and space-intensive code, in order
> that their intelligent behavior may have reasonable time-complexity.
> None of your comments are addressing the issue of tradeoffs between
> space and time complexity, which I believe are conceptually
> fundamental.
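
[The space-time tradeoff invoked here is a standard one in computing. A minimal illustration, mine rather than anything from the exchange: spending memory on a precomputed table to cut per-query time, as in table-driven bit counting:]

```python
# Illustration only: trading space for time. The slow version uses
# no extra memory but loops over every bit; the fast version spends
# 256 table entries to answer 32-bit queries in four lookups.

def popcount_slow(x: int) -> int:
    """Count set bits by shifting: minimal space, more time per call."""
    n = 0
    while x:
        n += x & 1
        x >>= 1
    return n

# Precomputed answers for every possible byte value.
TABLE = [popcount_slow(i) for i in range(256)]

def popcount_fast(x: int) -> int:
    """Count set bits of a 32-bit value via four byte-table lookups."""
    return (TABLE[x & 0xFF] + TABLE[(x >> 8) & 0xFF]
            + TABLE[(x >> 16) & 0xFF] + TABLE[(x >> 24) & 0xFF])
```

[Goertzel's conjecture, as I read it, is that an AGI needs a great deal of such precomputed, specialized machinery to act intelligently in reasonable time, and that this machinery carries algorithmic information of its own.]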

Your objection rests upon personal conjectures which are not known
mathematical results and which I do not share. I think there can be a
core of bounded algorithmic complexity which decompresses itself into a
powerful AI in reasonable time. If you think this requires vast amounts
of additional starting complexity to run in reasonable time, you could
be right, but I would be surprised. And even so it would only present a
problem for FAI if you couldn't run the optimized algorithms through a
core checker simple enough to be reliable. I will cross that bridge
only if I must come to it.

Don't forget to identify which parts of your argument are math and which
parts are your personal intuition; I wouldn't want people to think there
was a known mathematical problem with FAI. Please remember that others
may not trust your intuitions as much as you do.

Eliezer S. Yudkowsky                
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT