From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Sun Aug 13 2006 - 21:57:55 MDT
Michael Anissimov wrote:
> Tennessee,
>
> An AGI is not a concrete "thing", it is a huge space of possibilities.
> It is a set defined only by the characteristics of general
> intelligence and being built artificially. There are more possible
> AGIs than there are bacteria on earth.
I beg to request more clarification. Eliezer promotes (for example)
Bayes as a possibly perfect way of reasoning and drawing inferences. If
this is so, does this not imply that all questions have a correct,
non-subjective answer? And if the correctness of Bayesian reasoning is
non-subjective, does this not perhaps mean that all perfectly reasoning
AGIs would in fact reach the same conclusions?
I was trying to explore the effect of perfect reasoning on ideas of
individuality.
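To make concrete what I mean by "the same conclusions", here is a toy
sketch of my own (the numbers are purely illustrative, not anything
from Eliezer's writing): two reasoners with very different priors about
a hypothesis H update on the same evidence via Bayes' theorem,
P(H|E) = P(E|H) * P(H) / P(E), and end up nearly indistinguishable.

    # Toy sketch, my own: shared evidence drives different priors together.
    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        """Return P(H|E) from P(H) and the likelihoods of E under H and not-H."""
        p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
        return p_e_given_h * prior / p_e

    agent_a, agent_b = 0.9, 0.1        # very different starting beliefs in H
    for _ in range(50):                # both observe the same H-favouring evidence
        agent_a = bayes_update(agent_a, 0.8, 0.3)
        agent_b = bayes_update(agent_b, 0.8, 0.3)

    print(agent_a, agent_b)            # both posteriors are now essentially 1.0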
> The 'interests' of an AGI will be defined by its goals. If we have an
> AGI whose goal is to maximize paperclips, then it will only care about
> pieces of knowledge that contribute to accomplishing that goal.
> Everything else can be completely ignored, except insofar as it is
> predicted to contribute to achieving its goals more effectively.
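If I follow that framing correctly, it amounts to something like the
following toy sketch (my own construction, purely hypothetical): the
agent's "interest" in a fact is nothing more than that fact's predicted
contribution to its single goal, and everything scoring zero is ignored.

    # Toy illustration, mine alone: knowledge is filtered purely by its
    # expected contribution to the one goal; the rest is discarded.
    GOAL = "maximise paperclips"

    # (fact, expected extra paperclips if the fact is learned and used)
    knowledge = [
        ("steel wire tensile properties", 1_000_000),
        ("location of iron ore deposits",   500_000),
        ("history of Renaissance painting",       0),
        ("human aesthetic preferences",           0),
    ]

    def relevant(facts):
        """Keep only the facts predicted to advance the goal."""
        return [fact for fact, value in facts if value > 0]

    print(relevant(knowledge))   # everything else is completely ignored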
I have always felt that an AGI whose goal is to maximise paperclips is a
straw-man example. To me, it rather begs the question -- it assumes that
a sufficiently intelligent being might really do such a thing. Granted,
the "paperclip" scenario is supposed to stand in for alternative
scenarios which we may simply fail to understand, etc. However, I don't
accept that the example can truly stand in in that way. The paperclip
example seems to me to represent not merely "an apparently useless
exercise" but "an actually useless exercise".
It seems disingenuous to select such an example for a thought experiment
which is supposed to show something else.
Is it reasonable to postulate a truly superintelligent being which has
such an actually useless goal? I say not.
Furthermore, what is the distinction you make between the interests of
an AGI and its goals? Could you perhaps explore this division a little
more deeply? How might a goal be distinct from an interest? How might a
"goal" and an "interest" be understood in the situation I postulate
where goals that relate to discovering knowledge have been exhausted by
the AGI?
> Happiness and boredom are conscious feelings generated by human
> brainware. While an AGI might experience feelings that we might
> compare to boredom and happiness, their effects and underlying
> conscious experiences might be entirely different.
Possibly. However, any system which may choose to take action, or to
continue without taking action, must by definition have a motivating
force which causes it to act. Whether or not you call that motivating
force happiness, it must nonetheless be present. It seems to me implied
in the concept of "choice" that there must exist motivation for the
choice, and to act or not to act is an inherently binary operation.
Whether there is a single motivating force or a whole set of them, the
choice is nonetheless binary.
Perhaps the mind of the AGI enters into eternal non-action, if you
prefer such nomenclature.
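Concretely, the structure I have in mind is something like this toy
sketch (mine alone, nothing more than a restatement of the binary
point): however many motivating forces there are, and whatever we call
them, they collapse into a single act-or-don't-act decision.

    # My own toy framing: the motivating forces, whatever they are,
    # reduce to one yes/no decision to act.
    def chooses_to_act(motivations):
        """Act iff the net motivating force is positive; otherwise stay inactive."""
        return sum(motivations.values()) > 0

    # A mind whose knowledge-seeking goals are all exhausted:
    print(chooses_to_act({"curiosity": 0.0, "goal_pressure": 0.0}))  # False -> eternal non-action
    print(chooses_to_act({"curiosity": 0.7, "goal_pressure": 0.2}))  # True  -> it acts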
> If you have the ability to program a brain any way you want, then you
> could tie the emotion of 'boredom' to any stimulus and the emotion of
> 'happiness' to any stimulus. For example, you could program an AGI
> that feels 'bored' when it accomplishes its goals, but its underlying
> goal system continues to push it towards accomplishing those goals,
> even though it feels eternally bored doing so. There might be AGIs
> that feel happy at the thought of being destroyed. When the mind is a
> blank slate, any stimulus can theoretically lead to any conscious
> sensation, with the right programming.
Indeed. Is this a good thing? This list by its nature is dedicated to
the moral exploration of superintelligence. Had we no opinion on what
would or would not be "good" in a superintelligent framework, we could
say nothing of merit and would have to simply accept whatever comes.
Instead, however, we are considering the limitations and implications of
AGI, both in terms of self-preservation and more widely in terms of other
moral qualities. What might an infinitely plastic mind, having achieved
all goals related to the accumulation of knowledge, adopt as a goal?
> I get the impression that you don't appreciate how alien an
> arbitrarily programmed mind can truly be. The following chapter in
> CFAI only takes about half an hour to read, and it will change the way
> you think about AI forever:
>
> http://www.intelligence.org/CFAI/anthro.html
I've skimmed it and bookmarked it, and will try to read it more
thoroughly, especially as it may be relevant to this discussion.
I might say, though, that if an AGI is perfectly alien, then it is also
perfectly incomprehensible. If it is perfectly incomprehensible, then
everything we discuss here is complete rubbish.
Cheers,
-T