From: Ben Goertzel (ben@goertzel.org)
Date: Tue May 25 2004 - 16:46:51 MDT
Ben wrote:
> > I suspect (but don't know) that this is not merely hideously difficult
> > but IMPOSSIBLE for highly intelligent self-modifying AI systems. I
> > suspect that for any adequately intelligent system there is some
> > nonzero possibility of the system reaching ANY POSSIBLE POINT of the
> > state space of the machinery it's running on. So, I suspect, one is
> > inevitably dealing with probabilities.
Eliezer wrote:
> Odd. Intelligence is the power to know more accurately and choose
> between futures. When you look at it from an information-theoretical
> standpoint, intelligence reduces entropy and produces information, both
> in internal models relative to reality, and in reality relative to a
> utility function. Why should high intelligence add entropy?
For roughly the same reason that your future is less certain than that
of a rock!
And for the same reason that you are more likely to turn yourself into a
rock than a rock is to turn itself into a human.
It's true that intelligence allows the making of more accurate
predictions. But it also enables the creation of more complexly
indeterminate situations.
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT