Re: Happy Box

From: Samantha Atkins (sjatkins@gmail.com)
Date: Sat May 03 2008 - 10:16:46 MDT


Mikko Rauhala wrote:
> On Fri, 2008-05-02 at 08:08 -0700, John K Clark wrote:
>
>> "Mikko Rauhala" mjrauhal@cc.helsinki.fi
>>
>>> This only means that you had an incomplete
>>> and/or faulty understanding of your goal system
>>>
>> No, it means that even a super intelligent AI will not know everything,
>> and at times it will come into possession of new information that it
>> never dreamed existed before, and when it does it not only can change
>> its goal structure it MUST change its goal structure or it doesn’t
>> deserve the grand title of “intelligent” much less “super intelligent”.
>>
>
> How quaint, you're practically implying that goals and intelligence are
> somehow intertwined. This is not too far off from claims of objective
> morality and all that jazz.
>
> No (super)goal is more "intelligent" than another excepting perhaps for
> self-consistency. (Similarly, for consistency's sake, subgoals may be
> "intelligent" or not in the sense of how well they further a supergoal.)
> No matter what the new information is, it should not change a sane
> agent's goal system, merely provide new means to an end. Note that even
> as humans are hardly sane and generally inconsistent as hell, even we
> are pretty consistent in this. "The internet is for porn."
>
I disagree. To claim that no supergoal is better or more intelligent
than another implies that the entities involved effectively have no
nature, that they somehow exist in a vacuum where it is utterly and
completely irrelevant whether their goals make the conditions they live
in better or worse for themselves and those around them. That claim is
simply and literally without foundation.

Much of our discussion presumes that a super-intelligence will not do
meta-abstraction on its own goals. This is very convenient, and perhaps
necessary for convincing ourselves that we can somehow cause such a
being to be "Friendly". But it does not match any intelligent systems
existing to date and is not a foolproof assumption.

- samantha



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT