Re: On the dangers of AI (Phase 2)

From: Richard Loosemore
Date: Wed Aug 17 2005 - 13:49:04 MDT

justin corwin wrote:
> On 8/17/05, Richard Loosemore <> wrote:
>>Allow me to illustrate. Under stress, I sometimes lose patience with my
>>son and shout. Afterwards, I regret it. I regret the existence of an
>>anger module that kicks in under stress. Given the choice, I would
>>switch that anger module off permanently. But when I expressed that
>>desire to excise it, did I develop a new motivation module that became
>>the cause for my desire to reform my system? No. The desire for reform
>>came from pure self-knowledge. That is what I mean by a threshold of
>>understanding, beyond which the motivations of an AI are no longer
>>purely governed by its initial, hardwired motivations.
> You are misunderstanding here. You *already have* desires to reform
> yourself. Humans are inconsistent, with multiple sources of
> motivation. You presumably love your son, and desire to be a good
> person. These motivations come from a different source than your
> temporary limbic rage, and are unaffected in intensity and
> directionality. Hence, those motivations view anger as orthogonal to
> your goals of loving your son, and being a good person.

No, once again, I protest that I am not underestimating the
sophistication of the cognitive system.

You are correct, I will grant you, that my example may simply be a case
of one motivation system triumphing over another (love vs. limbic rage),
but I would argue back to you that on this particular point we would
have to come down to brass tacks and look into the design of the system
to find out if the driver of my reformist tendency was that other
motivation system, or something higher like self-knowledge. You simply
cannot assert that it is definitely and obviously a hard-wired
motivation system that is doing it, without showing why it has to be so.

So perhaps that example was not good enough evidence.

Here I will quickly try again, by way of your other comment:

> "Pure self-knowledge" doesn't change anything about your total
> motivations. You already wanted to be a more consistent person, which
> includes revising some of your inconsistent, less powerful human
> motivations. You'll notice, if you examine yourself carefully, that
> you have little desire to reform your most important, cherished
> beliefs. This is probably not because they are objectively the best,
> but rather because they are the things that are important to you, they
> comprise central portions of your motivations.

I will declare, here and now, that I just introspected, and you are
mistaken (oops, I feel like we just went back to the age of the
introspectionists. gulp).

Do I really have "little desire to reform [my] most important, cherished
beliefs"?

That is not so. I happen to have some anti-desires (repugnances) that
are quite strong. I do not want to become a woman, do not want to be
gay (no disrespect to transgender/gay folks here: just expressing my
personal feelings about *my* lifestyle choices), and I am deeply
repelled by drugs.

Now, I know I *could* in some future world, flip a switch in such a way
that I would enjoy being gay, or being a woman. I know that after I
flipped the switch, I would enjoy it deeply, even though the thought
makes my skin crawl right now.

Would I flip the switch? I would be happy to do it.

Now please explain: which motivation system allowed me to make that choice?

I made the choice, I claim, at a pure thought level. I am not just a
slave to the cruder parts of my motivation system.

Or at least, if I cannot yet prove that it happened at a level above the
slave-to-motivation level, can you convincingly argue that it *could not*
have originated at that higher level?


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT