Re: Predictions versus Projections

From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Fri Jul 15 2005 - 10:25:25 MDT


Marc Geddes <marc_geddes@yahoo.co.nz> wrote:
Here's something subtly wrong with the notion that intelligence is all about 'prediction'. I prefer the word 'projection'. It would be more accurate to say that intelligence is all about making correct *projections* rather than saying that it's all about making correct *predictions*. What's the difference?

Well, the way the word 'prediction' is normally used, it means predicting outcomes in the *physical sciences*. The physical sciences deal with inanimate objects - or objects that are (in the limit) totally isolated from interference from volitional entities (like humans). When predicting a solar eclipse, for instance, scientists are assuming that no one is going to come along and influence the sun in such a way that stops the eclipse from happening. For instance, if a trickster alien were to use advanced technology to do something to the sun, the prediction of an eclipse could be invalidated. So the hidden assumption in 'predictions' in the physical sciences is that systems are totally isolated from *volitional agencies* (conscious entities that might interfere with the results).

*Projections* are slightly different from predictions, because projections are *possible outcomes* that can include actions by volitional entities. So they can *mix* agency (volition) with inanimate objects.

Now, once you mix inanimate and animate objects in making projections, there's then a link established between utilities (goals of sentient beings) and predictions (movements of inanimate objects). And it's this link that busts Bayes and allows the possibility of an objective morality.

Bayesian reasoning (induction) assumes as a limit that a system is isolated from interference from one's own volition. Bayes is about making *predictions*. But as I just explained, real intelligence is about *projections*, where the movements of inanimate objects are mixed with the actions of sentient beings.

An unfriendly goal system may place bounds on intelligence, because unfriendly goals might be the ones that cause agency (volition) to mix with inanimate objects in such a way as to interfere with accurate *projections*.
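
A minimal sketch of Bayesian 'prediction' in the isolated-system sense described above (the hypotheses, numbers, and helper function here are invented purely for illustration, not taken from the quoted post):

    # Toy Bayesian update in the "prediction" sense: an isolated physical
    # system, with no volitional agents anywhere in the hypothesis space.
    # All hypotheses and numbers are invented for illustration.

    def bayes_update(prior, likelihoods, evidence):
        """Return the posterior P(H | evidence) for each hypothesis H."""
        unnormalized = {h: prior[h] * likelihoods[h][evidence] for h in prior}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    # Hypotheses about tomorrow's sun-moon configuration.
    prior = {"eclipse": 0.5, "no_eclipse": 0.5}

    # P(today's measurement | hypothesis), assuming the system is undisturbed.
    likelihoods = {
        "eclipse":    {"aligned_orbits": 0.99, "misaligned_orbits": 0.01},
        "no_eclipse": {"aligned_orbits": 0.05, "misaligned_orbits": 0.95},
    }

    print(bayes_update(prior, likelihoods, "aligned_orbits"))
    # -> {'eclipse': 0.952, 'no_eclipse': 0.048} (approximately)

Nothing in this toy model assigns any probability to an outside agent interfering with the sun; that possibility simply isn't in the hypothesis space, which is the hidden isolation assumption at issue.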

  Granting the rest of your scheme for the moment: what makes you so sure that AGI will be a sentient being, and not just another class of inanimate object, which seems to be the composition of everything else that is not conscious in your ontology? If AGI can be made sentient, that seems to make friendliness a whole lot easier to instill. But apart from a "Blue-Brain XVII" creation, I don't think the world's first AGI will be a conscious actor in any sense.



