No pure predictors (was: Singularity Blues)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Apr 06 2005 - 14:18:16 MDT


Hal Finney wrote:
>
> Artificial Intelligence literally just refers to the intelligence aspect,
> which is the processing/predicting/modelling part of the brain. Only when
> you marry some kind of goal to an intelligence do you get a volitional
> being, one which can take actions in the world to achieve its goals.

Hal, I warn strongly against trying to compartmentalize intelligence
this way. In pure mathematics it sometimes makes sense to distinguish
probability theory from decision theory, Bayes from expected utility.
But as far as I can tell, any actual intelligence needs both the decision
component and the prediction component, even if all you think you want from it is
pure prediction. To model the world well using bounded computing power,
even an allegedly pure predictor must:

(A) Choose how to spend its limited computing power (choose what to
think about).
(B) Choose which experiments to perform - estimate the expected value of
information (see the sketch after this list).

and of course, any *interesting* optimization process will

(C) Choose which modifications to its own code/substrate will improve its
prediction ability given its limited computing power.
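
For concreteness, here is a minimal sketch of (B). The setup is invented
purely for illustration - a toy predictor weighing two hypotheses about a
coin, choosing between two possible experiments by expected information
gain (expected reduction in posterior entropy). None of the numbers or
names come from anywhere else; the point is only that the choice of
experiment is a decision, even though the goal is pure prediction.

    # Toy sketch (invented example): a predictor must *choose* which
    # experiment to run, by expected information gain.
    from math import log2

    def entropy(dist):
        # Shannon entropy of a discrete distribution, in bits.
        return -sum(p * log2(p) for p in dist if p > 0)

    def posterior(prior, likelihoods):
        # Bayes update given per-hypothesis likelihoods of one outcome.
        joint = [p * l for p, l in zip(prior, likelihoods)]
        z = sum(joint)
        return [j / z for j in joint]

    def expected_info_gain(prior, outcome_likelihoods):
        # Expected drop in posterior entropy from running one experiment.
        # outcome_likelihoods[o][h] = P(outcome o | hypothesis h).
        gain = 0.0
        for likes in outcome_likelihoods:
            p_outcome = sum(p * l for p, l in zip(prior, likes))
            if p_outcome > 0:
                gain += p_outcome * (entropy(prior)
                                     - entropy(posterior(prior, likes)))
        return gain

    # Two hypotheses: the coin of interest is fair (h0) or biased (h1).
    prior = [0.5, 0.5]

    # Experiment 1: flip that coin.  P(heads|h0)=0.5, P(heads|h1)=0.9.
    exp1 = [[0.5, 0.9],   # heads
            [0.5, 0.1]]   # tails

    # Experiment 2: flip an unrelated reference coin - uninformative.
    exp2 = [[0.5, 0.5],
            [0.5, 0.5]]

    gains = {"flip the coin in question": expected_info_gain(prior, exp1),
             "flip an unrelated coin": expected_info_gain(prior, exp2)}
    print(gains)                           # ~0.15 bits vs. 0 bits
    print("choose:", max(gains, key=gains.get))

Nothing about the prediction task itself tells the system which flip to
make; the value-of-information calculation is where the decision theory
sneaks back in.
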

But even without (C) you simply cannot disentangle decision from
prediction in real-world systems, not if you expect to have any sort of
decent, efficient, generalized prediction power. Decision-less
prediction systems will be predefined specialized algorithms and
probably quite stupid. You're trying to pry apart two things that don't
come apart in real-world systems. You're trying to fence the big
diamond, not by breaking it into smaller diamonds, but by selling one
facet at a time.
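
And to make (A) concrete in the same toy spirit - again an invented setup,
not a description of any real system - here is a predictor with a fixed
refinement budget deciding which region of its world-model to spend each
step on. The allocation rule is a decision policy sitting inside what is
nominally a pure prediction system.

    # Toy sketch (invented example): spending a fixed compute budget is
    # itself a choice about what to think about.
    def expected_error(initial_error, steps):
        # Assumed model: error halves with each refinement step on a region.
        return initial_error * (0.5 ** steps)

    def allocate(budget, initial_errors):
        # Greedily assign each step to whichever region benefits most.
        steps = [0] * len(initial_errors)
        for _ in range(budget):
            gains = [expected_error(e, s) - expected_error(e, s + 1)
                     for e, s in zip(initial_errors, steps)]
            best = max(range(len(gains)), key=gains.__getitem__)
            steps[best] += 1
        return steps

    # Three regions of the model with different initial prediction errors.
    print(allocate(6, [0.8, 0.4, 0.1]))   # -> [4, 2, 0]

Most steps go to the worst-modelled region, and that split is a choice the
predictor has to make before it ever emits a prediction.
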

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

