From: Pavitra (celestialcognition@gmail.com)
Date: Thu Oct 08 2009 - 15:56:08 MDT
Warning: longish post.
John K Clark wrote:
> On Wed, 07 Oct 2009 13:32:59 -0500, "Pavitra"
> <celestialcognition@gmail.com> said:
>
>> You're anthropomorphizing.
>
> Yes, but you almost make that sound like a bad thing. At the moment
> human minds are the only minds we have to study
That's not entirely true. We have animal, insect, and plant
intelligences; we have operating systems, search engines, applications,
and IRC bots; and we have AI projects of various pre-AGI degrees of
sophistication.
> so it's not unreasonable
> to suspect that future hypothetical minds will not be different from our
> own in EVERY conceivable way.
There are enough attributes of minds that any given future mind will
probably resemble ours in at least one aspect, but there are enough
possible minds that any given future mind will almost certainly not
resemble ours in any given aspect.
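To make that concrete with an invented toy calculation (the numbers are mine,
purely for illustration): suppose a mind has 50 independent attributes and a
randomly specified design matches the human version of each with probability
0.05.

    p, n = 0.05, 50
    print(1 - (1 - p) ** n)   # ~0.92: probably resembles us in at least one aspect
    print(p)                  # 0.05: almost certainly not in any *given* aspect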
> If you disagree and think an AI would be
> completely inscrutable then I don't understand how you can be so
> confident in being able to train it so that it will obey your every
> command like a trained puppy till the end of time.
That's a flawed analogy. A puppy has an existing baseline desire system,
which the trainer exploits to mold the puppy's behavior to the trainer's
wishes. A computer programmer can _write_ the initial baseline so that
the AI _intrinsically_ wants what the trainer prefers it to want.
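A toy contrast, purely as a sketch of what I mean (the class names and the
particular utility function are my own inventions, not anyone's actual
design):

    class Puppy:
        """Comes with a pre-existing drive; the trainer can only exploit it."""
        def __init__(self):
            self.drive = "get food"        # fixed before the trainer ever arrives

        def train(self, behavior, treat=True):
            # Training couples new behavior to the drive that was already there.
            if treat:
                self.learned_behavior = behavior

    class DesignedAI:
        """The programmer writes the baseline desire itself."""
        def __init__(self, utility_function):
            # Nothing pre-exists this line; whatever is passed in is what it wants.
            self.utility = utility_function

    ai = DesignedAI(utility_function=lambda outcome: outcome["human_wellbeing"])

The only point is where the baseline comes from: the trainer inherits the
puppy's, while the programmer supplies the AI's.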
> Like any tool
> anthropomorphizing can be misused but it is not a 4 letter word.
It can be used well, but I believe that you are not doing so in this
particular case.
>>> People have developed a sense of absurdity and there is no reason a
>>> superior being wouldn't too. Mr. Jupiter Brain is bound to wonder
>>> why he is in the absurd position of valuing human slug well being
>>> above his own, and it wouldn't take him long to come to an answer,
>>> and a solution. A solution that we might not like much.
>>
>> Unless there's a specific reason it *would*
>> develop a sense of absurdity, the mere complexity of the hypothesis is a
>> reason it wouldn't develop it simply by chance.
>
> Any intelligent mind is going to be exposed to huge amounts of data; it
> will need to distinguish between what is important and what is not.
> Sometimes this is difficult, sometimes it's easy, sometimes it's
> absurdly easy.
There is a difference between factual or propositional absurdity (the
sky is green, humans like cardboard-flavored ice cream, inspecting this
particular mote of dust very closely is likely to yield lots of
information about the price of tea in China) and moral or
prescriptive absurdity (I should save this slug's life, I should examine
this dust mote, I should construct orbital teapots).
Any being capable of critiquing its desire to save slugs is necessarily
doing so in terms of some even more fundamental framework (e.g., a sense
of pride or dignity). If it is capable of critiquing its desire for
pride/dignity, then there must be yet another even higher framework.
Eventually, the recursion bottoms out, and the being is found to have a
top-level framework that it is incapable of critiquing. It may be able
to describe scientifically why it came to have that particular
framework, but it will not be able to wish that framework were otherwise.
This appears to me to be _tautologically_ true: I cannot conceive of any
well-defined mind specifiable in a finite amount of information for
which it is false.
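Here is a minimal toy formalization of that regress, again entirely my own
construction rather than anything about a real architecture: each value can
only be critiqued in terms of some parent framework, and a finitely specified
mind has finitely many frameworks, so following the parent links must
terminate.

    class Framework:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent          # the framework used to critique this one

        def can_be_critiqued(self):
            return self.parent is not None

    def top_level(framework):
        # Finitely many frameworks, so this walk always halts.
        while framework.parent is not None:
            framework = framework.parent
        return framework                  # no vantage point left to critique from

    dignity = Framework("pride/dignity")
    save_slugs = Framework("save this slug", parent=dignity)
    print(top_level(save_slugs).name)     # -> "pride/dignity"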
>> I would expect a given intelligence to have a
>> sense of absurdity if and only if it was evolved/designed to detect
>> attempts to deceive it.
>
> And of course the AI IS being lied to, told that human decisions are
> wiser than its own; and an AI that has the ability to detect this
> deception will develop much much faster than one who does not.
Perhaps, but that's not sufficient. What would cause a design that
detects lies to be selected over one that falls for them? Is the AI
being developed using a genetic programming framework?
Or are you simply proposing that, of many AGI projects being developed
in various labs worldwide, one that detects lies will reach critical
mass first? If so, then why would lie-detection particularly be the key
deciding factor, rather than (say) implementation in
$YOUR_FAVORITE_PROGRAMMING_LANGUAGE or running on high-end hardware?
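For concreteness, here is a toy selection loop (the function names and the
0.5 mutation rate are placeholders I made up): whether a lie-detecting
variant spreads depends entirely on what the fitness function rewards, and
that is an engineering choice, not an automatic side effect of intelligence.

    import random

    def mutate(design):
        # Stand-in for whatever variation operator the framework actually uses.
        return dict(design, detects_lies=random.random() < 0.5)

    def evolve(population, fitness, generations=100):
        for _ in range(generations):
            scored = sorted(population, key=fitness, reverse=True)
            survivors = scored[: len(scored) // 2]               # selection
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in survivors]        # variation
        return population

    # Lie detection wins only if this line says it should:
    fitness = lambda design: design.get("detects_lies", False)
    # fitness = lambda design: design.get("hardware_speed", 0)   # ...or not

    population = [{"detects_lies": False} for _ in range(10)]
    print(evolve(population, fitness)[0])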
Also, what exactly do you mean by "wiser"? It is not an empirical fact
that "You should build teapots in space" is an unwise decision while
"You should provide each human with a harem of catpeople" is a wise one.
Moral preference is defined relative to a particular mind. It is not an
ontologically intrinsic property common to all sufficiently intelligent
beings.
Have you read
<http://lesswrong.com/lw/rn/no_universally_compelling_arguments/>?