RE: The Geddes conjecture

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jul 16 2005 - 07:18:59 MDT


So your new argument is that

1) intelligence requires some kind of non-scientific ability to model and
project the behavior of other minds (presumably using some kind of heuristic
internal simulation process, or else quantum resonance, or whatever...)

2) unfriendliness interferes with 1

But even if 1 is true (and I don't see why modeling other minds needs to be
so mysterious and unscientific), 2 doesn't follow at all...

Some very unfriendly humans are very good at modeling other human minds
internally, unfortunately...

ben

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org]On Behalf Of Marc
> Geddes
> Sent: Saturday, July 16, 2005 2:43 AM
> To: sl4@sl4.org
> Subject: RE: The Geddes conjecture
>
>
>
> --- Ben Goertzel <ben@goertzel.org> wrote:
>
>
> > Unfortunately, I think the arguments actually hurt
> > your conjecture.
> >
> > At the start, it sounds highly improbable, but
> > appealing enough, if it were true, that it's worth
> > considering...
> >
> > But the arguments are SO bad that they actually make
> > the conjecture seem
> > even less likely than it did at the start, at least
> > from my point of view
> > ;-)
> >
> > But anyway, it was an entertaining email to wake up
> > to today, thanks...
> >
> > -- Ben
> >
>
> OK, I made 'scrambled eggs' (garbled stuff) of my
> arguments again. That's what happens when one tries
> to convert intuitions into words *sigh*
>
> Let me take another crack at it. Warning: this new
> tack is highly abstract, though.
>
> So why do I think unfriendly utilities limit
> intelligence?
>
> Here's the argument:
>
> I've identified a distinction between *projection* on
> the one hand and *prediction* on the other. I define
> *projection* to be a cross-domain integration between
> two different forms of prediction:
>
> * Predictions about inanimate processes
> * Predictions about volitional agents
>
> Clearly there are two different levels of description
> here, and *projection* has to integrate the two. So
> *projections* are a higher-level form of prediction.
>
> I hold that intelligence is about *projection*, not
> *prediction*. Real intelligence has to reason about
> future outcomes that involve the mixing of volitional
> agents with inanimate processes - that's *projection*,
> as I just defined it above.
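>
> To make the distinction concrete, here is a toy sketch
> in Python. Everything in it - the one-dimensional
> world, the names, the catching rule - is invented
> purely for illustration; it is not a claim about how a
> real mind implements projection:
>
> def predict_inanimate(ball_pos, ball_vel):
>     # Level 1: predict an inanimate process in isolation.
>     return ball_pos + ball_vel
>
> def predict_agent(goal, agent_pos):
>     # Level 2: predict a volitional agent in isolation
>     # (it simply steps toward its goal).
>     return agent_pos + (1 if goal > agent_pos else -1)
>
> def project(goal, ball_pos, ball_vel, agent_pos):
>     # Projection: run both levels *together*, because the
>     # agent's choice can alter the "inanimate" trajectory.
>     next_agent = predict_agent(goal, agent_pos)
>     next_ball = predict_inanimate(ball_pos, ball_vel)
>     if next_agent == next_ball:
>         next_ball = ball_pos  # the agent catches the ball
>     return next_ball, next_agent
>
> print(project(goal=5, ball_pos=3, ball_vel=1, agent_pos=3))
> # (3, 4): alone, the ball-predictor says 4; only the
> # integrated projection sees that the agent intercepts it.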
>
> Now scientific prediction, as it is ordinarily
> defined, does not allow the mixing of levels of
> organization in this way. Scientific prediction
> assumes a sharp distinction between inanimate
> processes on the one hand and volitional agencies on
> the other. (In order to study something objectively,
> the system under study has to be completely isolated
> from interference by our own volition.) So the hidden
> assumption behind ordinary scientific *prediction* is
> that there is no mixing between our own volitional
> agency and the inanimate process under study. Can you
> see the distinction between my definition of
> 'projection' and ordinary 'prediction'?
>
> Now if we allow that intelligence is really about
> 'projection' and not 'prediction', then there's some
> room for new kinds of theories.
>
> And here is where my argument gets really abstract and
> fuzzy. I just defined 'projection' to be a cross-domain
> integration between predictions about volitional agency
> and predictions about inanimate processes. By allowing
> a mixing between levels of description, there is room
> for a link to be established between utilities (our
> goal system - our morality) and projections (our
> prediction system - our intelligence).
>
> With this link established, it's now plausible that
> you can't just have any old morality and still have
> real intelligence. Because real intelligence is about
> *projection* and not just prediction, there's a
> cross-domain link between utilities and predictions,
> so the wrong kind of morality might interfere with our
> prediction system.
>
> The theory here is that unfriendly utilities are the
> utilities which interfere with projections in such a
> way as to reduce their effectiveness. Friendly
> utilities, on the other hand, are the ones which
> increase (better enable the actualization of) the
> projection system. If this is so, intelligence would
> be bounded by (limited by) the degree of friendliness.
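>
> To show the shape of that claim (not to prove it), here
> is the same idea as a toy calculation in Python. The
> line marked ASSUMPTION is exactly the conjectured link
> above - that unfriendliness distorts the agent-modeling
> half of projection - with 'friendliness' treated as a
> made-up scalar between 0 and 1:
>
> def projection_accuracy(friendliness):
>     physics_accuracy = 0.9  # untouched by utilities
>     # ASSUMPTION: unfriendly utilities degrade agent modeling.
>     agent_accuracy = 0.9 * friendliness
>     # Projection needs both levels, so accuracy is the product.
>     return physics_accuracy * agent_accuracy
>
> # If intelligence is projection rather than mere prediction,
> # it is bounded above by projection accuracy, which rises
> # with friendliness - the conjectured bound.
> for f in (0.0, 0.5, 1.0):
>     print(f, projection_accuracy(f))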
>
> ---
>
> THE BRAIN is wider than the sky,
> For, put them side by side,
> The one the other will include
> With ease, and you beside.
>
> -Emily Dickinson
>
> 'The brain is wider than the sky'
> http://www.bartleby.com/113/1126.html
>
> ---
>
> Please visit my web-site:
>
> Mathematics, Mind and Matter
> http://www.riemannai.org/
>
> ---


