From: Samantha Atkins (firstname.lastname@example.org)
Date: Tue May 25 2004 - 23:05:03 MDT
On May 25, 2004, at 3:07 PM, Eliezer Yudkowsky wrote:
> Ben Goertzel wrote:
>> Michael Wilson wrote:
>>> The correct mode of thinking is to constrain the behaviour of the
>>> system so that it is theoretically impossible for it to leave the
>>> class of states that you define as desirable. This is still
>>> hideously difficult,
>> I suspect (but don't know) that this is not merely hideously difficult
>> but IMPOSSIBLE for highly intelligent self-modifying AI systems. I
>> suspect that for any adequately intelligent system there is some
>> possibility of the system reaching ANY POSSIBLE POINT of the state
>> of the machinery it's running on. So, I suspect, one is inevitably
>> dealing with probabilities.
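Wilson's proposal, restricting a system so that leaving the desirable class of states is impossible by construction, can be illustrated with a minimal guarded transition system. This is an editorial sketch, not code from the thread; the invariant `is_desirable` and the bounds it checks are purely hypothetical:

```python
def is_desirable(state):
    # Hypothetical invariant: the state stays within safe bounds.
    return 0 <= state <= 10

def guarded_step(state, delta):
    """Apply delta only if the successor state remains desirable.

    Because every transition is checked against the invariant before
    it is committed, the system provably cannot leave the desirable
    set -- the "theoretically impossible" guarantee Wilson describes.
    """
    successor = state + delta
    if not is_desirable(successor):
        raise ValueError("transition rejected: leaves desirable set")
    return successor

state = 5
state = guarded_step(state, 3)      # accepted: 8 is in bounds
try:
    state = guarded_step(state, 7)  # rejected: 15 is out of bounds
except ValueError:
    pass
assert state == 8  # the system never left the desirable set
```

Goertzel's objection, in these terms, is that a sufficiently intelligent self-modifying system may be able to rewrite `guarded_step` itself, so the guarantee degrades into a probability.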
> Odd. Intelligence is the power to know more accurately and choose
> between futures. When you look at it from an information-theoretical
> standpoint, intelligence reduces entropy and produces information,
> both in internal models relative to reality, and in reality relative
> to a utility function. Why should high intelligence add entropy?
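The information-theoretic claim in the quoted paragraph, that a well-calibrated update concentrates belief and so lowers entropy, can be made concrete with a toy Bayesian update. The four hypotheses and the likelihood values below are illustrative assumptions, not from the original exchange:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Prior over four hypotheses: maximally uncertain (2 bits).
prior = [0.25, 0.25, 0.25, 0.25]

# Hypothetical likelihood of one observation under each hypothesis.
likelihood = [0.9, 0.05, 0.03, 0.02]

# Bayesian update: posterior is proportional to prior * likelihood.
unnorm = [p * l for p, l in zip(prior, likelihood)]
z = sum(unnorm)
posterior = [u / z for u in unnorm]

# The update concentrates probability mass on one hypothesis,
# reducing entropy of the internal model relative to reality.
assert entropy(posterior) < entropy(prior)
```

This models only the "internal models relative to reality" half of the claim; the other half, reducing entropy in reality relative to a utility function, is the analogous concentration over future world-states.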
In practice it is exceedingly difficult to fully predict or
understand the behavior of a complex system, even one far less complex
and more static than an SIAI. It may seem counter-intuitive or even
illogical to you, but it is nonetheless true. You also believe that an
intelligence can be created or grown that is so intelligent as to fully
understand itself. Whether this belief plays out is open to
conjecture. Some of us who build fairly complex software systems for
a living have doubts about the likelihood of some of your beliefs.
> It seems that if I become smart enough, I must fear making the
> decision to turn myself into a pumpkin; and moreover I will not be
> able to do anything to relieve my fear because I am too smart.
If you become smart enough you might stop trivializing perfectly valid
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT