RE: The Geddes conjecture

From: Marc Geddes
Date: Sat Jul 16 2005 - 00:42:40 MDT

--- Ben Goertzel wrote:

> Unfortunately, I think the arguments actually hurt your
> conjecture. At the start, it sounds highly improbable, but
> appealing enough if it were true that it's worth considering.
> But the arguments are SO bad that they actually make the
> conjecture seem even less likely than it did at the start,
> at least from my point of view ;-)
> But anyway, it was an entertaining email to wake up to
> today, thanks...
> -- Ben

OK, I made 'scrambled eggs' (garbled stuff) of my
arguments again. That's what happens when one tries
to convert intuitions into words *sigh*

Let me take another crack at it. Warning: this new
line of argument is highly abstract.

So why do I think unfriendly utilities limit
intelligence?

Here's the argument:

I've identified a distinction between *projection* on
the one hand and *prediction* on the other. Now
*projection* is a higher-level form of prediction. I
define *projection* to be a cross-domain integration
between two different forms of prediction:

*Predictions about inanimate processes
*Predictions about volitional agents

Clearly there are two different levels of description
here, and *projection* has to integrate the two. So
*projections* are a higher-level form of prediction.

I hold that intelligence is about *projection*, not
*prediction*. Real intelligence has to reason about
future outcomes which involve the mixing of volitional
agents with inanimate processes - that's
*projection*, as I just defined it above.

Now scientific prediction, as it is ordinarily
defined, does not allow the mixing of levels of
organization in this way. Scientific prediction
assumes a sharp distinction between inanimate
processes on the one hand and volitional agencies on
the other. (In order to objectively study something,
the system under study has to be completely isolated
from interference by our own volition.) So the hidden
assumption behind ordinary scientific *prediction* is
that there is no mixing between our own volitional
agency and the inanimate process under study. Can
you see the distinction between my definition of
'projection' and ordinary 'prediction'?

Now if we allow that intelligence is really about
'Projection' and not 'Prediction', then there's some
room for new kinds of theories.

And here is where my argument gets really abstract and
fuzzy. I just defined 'projection' to be a
cross-domain integration between predictions about
volitional agency and predictions about inanimate
processes. By allowing a mixing between levels of
description, there is room for a link to be
established between utilities (our goal system - our
morality) and projections (our prediction system).
With this link established, it's now plausible that
you can't just have any old morality and still have
real intelligence. Because real intelligence is about
*projection* and not just prediction, there's a
cross-domain link between utilities and predictions,
so the wrong kind of morality might interfere with our
prediction system.

The theory here is that unfriendly utilities are the
utilities which interfere with projections in such a
way as to reduce their effectiveness. Friendly
utilities, on the other hand, are the ones which
increase (better enable the actualization of) the
projection system. If this is so, intelligence would
be bounded by (limited by) the degree of friendliness.
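The conjectured bound could be caricatured as a toy
model. To be clear, everything below is my own
illustrative sketch, not part of the conjecture: the
function, the [0, 1] friendliness scale, and the
multiplicative form are all hypothetical assumptions.

```python
# Toy sketch (hypothetical, not from the argument above):
# treat 'friendliness' as a number in [0, 1] that scales how
# much raw prediction skill survives as usable cross-domain
# projection. The multiplicative form is an arbitrary choice
# made purely for illustration.

def projection_effectiveness(prediction_skill: float,
                             friendliness: float) -> float:
    """Hypothetical model: unfriendly utilities (friendliness
    near 0) interfere with projection, capping effective
    intelligence regardless of raw prediction skill."""
    if not 0.0 <= friendliness <= 1.0:
        raise ValueError("friendliness is assumed to lie in [0, 1]")
    return prediction_skill * friendliness

# Same raw prediction skill, different utilities: the less
# friendly agent's effective intelligence is bounded lower.
print(projection_effectiveness(100.0, 0.25))  # -> 25.0
print(projection_effectiveness(100.0, 0.75))  # -> 75.0
```

On this (again, purely illustrative) picture, no amount of
raw prediction skill compensates for an unfriendly utility
system, which is the shape of the bound the conjecture
asserts.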

THE BRAIN is wider than the sky,  
  For, put them side by side,  
The one the other will include  
  With ease, and you beside. 
-Emily Dickinson
'The brain is wider than the sky'
Please visit my web-site:
Mathematics, Mind and Matter

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT