From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue May 25 2004 - 16:07:40 MDT
Ben Goertzel wrote:
>
> Michael Wilson wrote:
>
>>The correct mode of thinking is to constrain the behaviour of
>>the system so that it is theoretically impossible for it to
>>leave the class of states that you define as desirable. This
>>is still hideously difficult,
>
> I suspect (but don't know) that this is not merely hideously difficult
> but IMPOSSIBLE for highly intelligent self-modifying AI systems. I
> suspect that for any adequately intelligent system there is some nonzero
> possibility of the system reaching ANY POSSIBLE POINT of the state space
> of the machinery it's running on. So, I suspect, one is inevitably
> dealing with probabilities.
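To make the two positions concrete, here is a toy sketch in Python (the four-state model and all the numbers are invented purely for illustration): Ben's picture corresponds to a Markov chain whose transition probabilities are all strictly positive, so every state, including every undesirable one, is reached with some nonzero probability; Wilson's proposal corresponds to engineering the transitions out of the desirable class to be exactly zero.

    def step(dist, matrix):
        # One step of a Markov chain: new_dist[j] = sum_i dist[i] * matrix[i][j]
        n = len(dist)
        return [sum(dist[i] * matrix[i][j] for i in range(n))
                for j in range(n)]

    # States 0 and 1 are "desirable"; states 2 and 3 are not.
    stochastic = [            # every entry positive: nothing is impossible
        [0.90, 0.08, 0.01, 0.01],
        [0.08, 0.90, 0.01, 0.01],
        [0.10, 0.10, 0.70, 0.10],
        [0.10, 0.10, 0.10, 0.70],
    ]
    constrained = [           # desirable rows put zero mass outside {0, 1}
        [0.90, 0.10, 0.00, 0.00],
        [0.10, 0.90, 0.00, 0.00],
        [0.10, 0.10, 0.70, 0.10],
        [0.10, 0.10, 0.10, 0.70],
    ]

    for matrix in (stochastic, constrained):
        d = [1.0, 0.0, 0.0, 0.0]          # start in desirable state 0
        for _ in range(50):
            d = step(d, matrix)
        print("P(undesirable) after 50 steps:", d[2] + d[3])
    # stochastic:  strictly positive, however small
    # constrained: exactly 0.0, by construction of the zero entries

Ben's suspicion is that for a sufficiently intelligent self-modifying system, the first matrix is the only honest model; Wilson is betting that the second can be enforced.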
Odd. Intelligence is the power to know the world more accurately and to
choose between futures. Viewed from an information-theoretic standpoint,
intelligence reduces entropy and produces information, both in internal
models relative to reality and in reality relative to a utility function.
Why should high intelligence add entropy?
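A toy version of that claim (the distributions are chosen by hand, for illustration only): a Bayesian update concentrates belief, and a competent optimizer concentrates outcomes, and in both cases the Shannon entropy falls.

    import math

    def entropy(dist):
        # Shannon entropy in bits of a discrete distribution
        return -sum(p * math.log2(p) for p in dist if p > 0)

    # Internal models relative to reality: conditioning on informative
    # evidence lowers the entropy of the belief distribution (on average).
    prior      = [0.25, 0.25, 0.25, 0.25]   # four rival hypotheses
    likelihood = [0.80, 0.10, 0.05, 0.05]   # P(evidence | hypothesis)
    joint      = [p * l for p, l in zip(prior, likelihood)]
    posterior  = [j / sum(joint) for j in joint]
    print(entropy(prior), entropy(posterior))    # 2.0 bits -> ~1.02 bits

    # Reality relative to a utility function: optimization pushes
    # probability mass toward preferred futures, narrowing the outcome
    # distribution in the same sense.
    unsteered = [0.25, 0.25, 0.25, 0.25]
    steered   = [0.85, 0.05, 0.05, 0.05]     # after competent optimization
    print(entropy(unsteered), entropy(steered))  # 2.0 bits -> ~0.85 bits

(A particular surprising observation can raise entropy, but conditioning lowers it in expectation.)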
It seems that if I become smart enough, I must fear making the decision to
turn myself into a pumpkin; and moreover I will not be able to do anything
to relieve my fear because I am too smart.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence