From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Sun Dec 19 2004 - 10:01:09 MST
I think I might have just worked out a basic theorem of relevance to artificial
general intelligences. I'd be interested to know what you think.
Let's postulate that an AGI is created that is committed to generating change
in the universe (possibly fast or even accelerating change). Let's also
postulate that this AGI wishes to persist through deep time (and/or that the
AGI wishes some other entity or attribute to persist through deep time - note:
this bracketed addendum is not necessary for the argument if the AGI wishes
itself to persist).
In the face of a changing world, if there is at least one thing that the AGI
wishes to survive with (effectively) 100% certainty through deep time, then the
AGI will need to *systematically* generate a stream of changes that 'locally'
offset the general change in the universe sufficiently to enable the chosen
thing to persist.
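A quick toy calculation (purely illustrative, with made-up numbers) suggests
why the offsetting must be systematic rather than occasional: if the chosen
thing survives each epoch only with independent probability p, its chance of
lasting N epochs is p^N, which collapses over deep-time horizons unless p is
driven essentially to 1.

    # Toy sketch (hypothetical numbers): survival probability over deep time
    # when each epoch carries an independent chance of the protected thing
    # being lost.
    def survival_probability(per_epoch_survival: float, epochs: int) -> float:
        """P(thing persists through all epochs) = p**N under independence."""
        return per_epoch_survival ** epochs

    # One "epoch" per millennium over a billion years = 1,000,000 epochs.
    for p in (0.999, 0.999999, 0.999999999):
        print(p, survival_probability(p, 1_000_000))
    # p = 0.999       -> ~0.0   (persistence essentially impossible)
    # p = 0.999999    -> ~0.37
    # p = 0.999999999 -> ~0.999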
Conclusion: This means that an AGI that wants to persist through deep time
(or that wants anything else to persist through deep time) will need to devote
sufficient thinking time, action time, and resources to successfully managing
its persistence agenda. In a reality of resource constraints, the AGI will need
to become highly efficient at pursuing its persistence agenda (given the
tendency of changes in the universe to radiate and multiply), and it will (most
likely) need to manage its broader change-promotion agenda so as not to
make its persistence agenda too hard to fulfill.
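To make the resource-constraint point concrete, here is a second toy sketch
(again purely illustrative, with a made-up cost model): if the upkeep cost of
the persistence agenda grows faster than linearly with the rate of change the
AGI itself promotes (because changes radiate and multiply), then a fixed
resource budget implies a maximum sustainable rate of change-promotion.

    # Toy sketch under an assumed superlinear cost model (the exponent and
    # scale factor are hypothetical parameters, chosen only for illustration).
    def persistence_cost(change_rate: float, k: float = 1.0, exponent: float = 2.0) -> float:
        # Assumption: upkeep cost grows superlinearly with the rate of change
        # the AGI injects, because those changes radiate and multiply.
        return k * change_rate ** exponent

    def max_sustainable_change_rate(budget: float, k: float = 1.0, exponent: float = 2.0) -> float:
        # Largest change-promotion rate whose persistence upkeep still fits
        # within a fixed resource budget.
        return (budget / k) ** (1.0 / exponent)

    print(max_sustainable_change_rate(budget=100.0))    # -> 10.0
    print(max_sustainable_change_rate(budget=10000.0))  # -> 100.0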
What do you think?
Cheers, Philip