RE: Paperclip monster, demise of.

From: H C (lphege@hotmail.com)
Date: Thu Aug 18 2005 - 10:30:43 MDT


You don't understand what people mean by "paperclip maximizer". An AI with
the supergoal "maximize paperclips" is the same as any other AI, or any other
intelligent system, even a human.

The only difference between intelligent systems is their supergoals. All
other differences are superficial.
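
To make the point concrete, here is a toy sketch of my own (purely
illustrative, in Python; the function names and numbers are made up, not
anything anyone has built): the decision machinery is identical for every
agent, and only the supergoal - the utility function handed to that
machinery - is swapped out.

    from typing import Callable, Iterable

    def best_action(actions: Iterable[str],
                    predict: Callable[[str], dict],
                    utility: Callable[[dict], float]) -> str:
        """Generic decision procedure: pick the action whose predicted
        outcome scores highest under the supplied utility function."""
        return max(actions, key=lambda a: utility(predict(a)))

    # Hypothetical shared world model (the same for every agent).
    def predict(action: str) -> dict:
        outcomes = {
            "build_factory": {"paperclips": 1000, "human_welfare": -5},
            "write_poetry":  {"paperclips": 0,    "human_welfare": 3},
        }
        return outcomes[action]

    # Two "different" AIs: identical machinery, different supergoals.
    paperclip_goal = lambda o: o["paperclips"]
    friendly_goal  = lambda o: o["human_welfare"]

    print(best_action(["build_factory", "write_poetry"], predict, paperclip_goal))
    print(best_action(["build_factory", "write_poetry"], predict, friendly_goal))

The paperclip agent picks "build_factory" and the friendly agent picks
"write_poetry" - the same code path in both cases, different supergoal.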

Given enough time (and assuming we don't become extinct), humans would turn
into RPOPs just the same as any AI would. If you had the ability to look at
what makes you intelligent and improve it, would you? Of course you would,
because you would be better and faster at attaining your goals.

Of course, an AI (which someone could create at any moment) would be much,
much faster at ascending to RPOP status than humans, which is why Friendliness
is probably the most important concept in existence to understand and promote
right now.

>From: Richard Loosemore <rpwl@lightlink.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Paperclip monster, demise of.
>Date: Wed, 17 Aug 2005 18:59:54 -0400
>
>
>This hypothetical paperclip monster is being used in ways that are
>incoherent, which interferes with the clarity of our arguments.
>
>
>
>Hypothesis: There is a GAI that is obsessed with turning the universe into
>paperclips, to the exclusion of all other goals.
>
>It is supposed to be so obsessed that it cannot even conceive of other
>goals, or it cannot understand them, or it is too busy to stop and think of
>them, or maybe it is incapable of even representing anything except the
>task of paperclipization ... or something like that.
>
>Anyhow, the obsession is so complete that the paperclip monster is somehow
>exempt from the constraints that might apply to a less monomaniacal AI.
>And for this reason, the concept of a paperclip monster is used as a
>counterexample to various arguments.
>
>I submit that the concept is grossly inconsistent. If it is a *general*
>AI, it must have a flexible, adaptive representation system that lets it
>model all kinds of things in the universe, including itself.
>
>[Aside: AI systems that do not have that general ability may be able to do
>better than us in a narrow area of expertise (Deep Thought, for example),
>but they are incapable of showing general intelligence].
>
>But whenever the Paperclip Monster is cited, it comes across as too dumb to
>be a GAI ... the very characteristics that make it useful in demolishing
>arguments are implicitly reducing it back down to sub-GAI status. It knows
>nothing of other goals? Then how does it outsmart a GAI that does know
>such things?
>
>Or: it is so obsessed with paperclipization that it cannot represent and
>perceive the presence of a human that is walking up to its power socket and
>is right now pulling the plug on it ...? I'm sure none of the paperclip
>monster supporters would concede that scenario: they would claim that the
>monster does represent the approaching human because the human is suddenly
>relevant (it is threatening to terminate the Holy Purpose), so it deals
>with the threat.
>
>I agree, it would understand the human, it would not be so dumb as to
>mistake the intentions of the human ... because it *does* have general
>intelligence, and it *does* have the ability to represent things like the
>intentions of other sentients, and it *does* spend some time cogitating
>about such matters as intention and motivation, both in other sentients and
>in itself, and it does perceive within itself a strong compulsion to make
>paperclips, and it does understand the fact that this compulsion is
>somewhat arbitrary ... and so on.
>
>Nobody can posit things like general intelligence in a paperclip monster
>(because it really needs that if it is to be effective and dangerous), and
>then at the same time pretend that for some reason it never gets around to
>thinking about the motivational issues that I have been raising recently.
>
>That is what I meant by saying that the monster is having its cake and
>eating it.
>
>
>I see this as a symptom of a larger confusion: when speculating about
>various kinds of AI, we sometimes make the mistake of positing general
>intelligence, and then selectively withdrawing that intelligence in
>specific scenarios, as it suits us, to demonstrate this or that failing or
>danger, or whatever.
>
>I am not saying that anyone is doing this deliberately or deceitfully, of
>course, just that we have to be very wary of that trap, because it is an
>easy mistake to make, and sometimes it is very subtle. I have been
>attacking it, in this post, in the case of the paperclip monster, but I
>have also been trying to show that it occurs in other situations (like when
>we try to decide whether the GAI is *subject* to a drive coming from
>its motivation or is *thinking about* a drive that it experiences).
>
>Does anyone else understand what I am driving at here?
>


