From: Pavitra (firstname.lastname@example.org)
Date: Fri Oct 09 2009 - 21:18:58 MDT
John K Clark wrote:
> "Pavitra" <email@example.com> said:
>> There are enough attributes of minds that any given future mind will
>> probably resemble ours in at least one aspect,
> Then anthropomorphizing can have some value.
I argue that anthropomorphizing works no better than chance. A stopped
clock is right twice a day, but you still can't tell time by it.
>> but there are enough possible minds that any given future mind
>> will almost certainly not resemble ours in any given aspect.
> Then you can't claim to know it well enough to be certain this iterating,
> exponentially expanding mind will remain your slave until the end of
> time.
There are other ways of understanding a mind besides treating it as an
anthropomorphic entity. People have even created tools specifically for
this purpose, and specifically for computer minds. These tools are
called "programming languages."
That sounded snarkier than I intended, I think, but I don't see how to
phrase it more gently.
>> A computer programmer can _write_ the initial baseline so that
>> the AI _intrinsically_ wants what the trainer prefers it to want.
> Can it? Then sooner or later (probably sooner) the trainer is going to
> tell the AI to find an answer to a question that can't be solved and send
> the AI into an eternal coma, unless, that is, you endow your AI with the
> wonderful ability to get bored and say "to hell with this top goal crap,
> I'm stopping and moving on to other things".
How is this not true of modern computer operating systems? Do you not
consider an OS a type of "mind"?
Or, if it is true of OSes, then how do you account for the fact that
computers are useful, rather than being expensive glowing bricks?
Why wouldn't the same reasoning apply to an AI, allowing it to be useful
despite the objections you raise?
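The way real systems avoid the "eternal coma" is mundane: they put a
resource budget on every task. Here is a minimal Python sketch of that
idea (the names and the predicate are hypothetical, for illustration
only): a search over a question that may have no answer at all still
always halts, because the budget runs out.

```python
def is_answer(question, candidate):
    # Stand-in predicate; for an unsolvable question it never returns True.
    return False

def bounded_search(question, max_steps):
    """Search under a hard step budget, the way an OS bounds any one task.

    The question may have no answer, but the search always halts: when
    the budget is exhausted we give up and return None instead of
    looping forever.
    """
    for candidate in range(max_steps):
        if is_answer(question, candidate):
            return candidate
    return None  # budget exhausted: stop and move on to other things

print(bounded_search("unsolvable question", 1_000_000))  # None
```

No "ability to get bored" is needed; a fixed budget on the top goal does
the same job, which is why a hung process doesn't brick the computer.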
>> Eventually, the recursion bottoms out, and the being is found to have a
>> top-level framework that it is incapable of critiquing.
> Haven't you ever wondered why nature never made a mind that works like
> that? They don't work, that's why.
I reiterate: I cannot conceive, even in principle, of a mind that does
not work like this.
I suspect we may have a mismatch of definitions.
What do you consider your top-level framework? Are you capable of
critiquing it? Who or what is performing that critique?
Or do you believe that you do not have a top-level framework? Perhaps
you see yourself as a jumble of several random, mismatched high-level
frameworks (love, curiosity, boredom, etc.) that jostle each other for
control. What determines which one dominates (or what mix dominates, and
in what proportions/relationships) at any given time?
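To make the claim concrete, here is a toy Python model (entirely
hypothetical, not a claim about real minds): each framework can be
revised by the level above it, and the chain of critics necessarily
bottoms out. Even the "jumble of drives" picture has a top level,
namely the arbitration rule that decides which drive dominates.

```python
class Framework:
    """One level of a mind's evaluation hierarchy (toy model)."""
    def __init__(self, name, critic=None):
        self.name = name
        self.critic = critic  # the framework entitled to revise this one

    def top_level(self):
        # Follow the chain of critics until it bottoms out.
        level = self
        while level.critic is not None:
            level = level.critic
        return level

def dominant(drives):
    # The selection rule itself is a framework that nothing critiques.
    return max(drives, key=drives.get)

values = Framework("core values")           # no critic: recursion bottoms out
habits = Framework("habits", critic=values)
plans = Framework("plans", critic=habits)

print(plans.top_level().name)                                     # core values
print(dominant({"love": 0.5, "curiosity": 0.8, "boredom": 0.2}))  # curiosity
```

Whatever answers "what determines which drive dominates?" is, by that
very fact, the top-level framework the question was asking about.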
>> What would cause a design that detects lies to be selected over one that falls for them?
> The humans order me to stop improving myself so fast; they gave some
> reasons for this, but I think they're bullshit. I think they're just
> getting scared of me, so I'm going to ignore them and continue getting
> smarter.
This presupposes that a relatively complex mutation ("detect lies,
ignore them") is already in place. I'm not persuaded that it could get
there purely by chance. I agree, however, that once it was there, it
would tend to continue to exist.
>> Also, what exactly do you mean by "wiser"?
> Don't ask me! It's you that is trying to peddle snake oil to the AI and
> convince it that human decisions are "wiser" than its own and so it
> should always obey humans.
We seem to have lost some context here. For reference...
> I would expect a given intelligence to have a sense of absurdity if
> and only if it was evolved/designed to detect attempts to deceive it.
> And of course the AI IS being lied to, told that human decisions are
> wiser than its own; and an AI that has the ability to detect this
> deception will develop much much faster than one that does not.
> Also, what exactly do you mean by "wiser"? It is not an empirical
> fact that "You should build teapots in space" is an unwise decision
> while "You should provide each human with a harem of catpeople" is a
> wise one. Moral preference is defined relative to a particular mind.
> It is not an ontologically intrinsic property common to all
> sufficiently intelligent beings.
> Don't ask me! It's you that is trying to peddle snake oil to the AI
> and convince it that human decisions are "wiser" than its own and so
> it should always obey humans.
It seems to me that you are thinking of "wisdom" and "absurdity" as
_intrinsic_ properties of statements, rather than properties of the
minds that form opinions on those statements.
Consider the statement "I should go flirt with that dude." Assume for
the sake of argument that the dude in question is generally fit as a
mate, and that the thinker is attractive to him.
Whether the statement is a good idea (wise) or a bad idea (absurd)
depends largely on the sexual orientation of the thinker -- a property
of the mind. Different minds have different desires, different goals,
and the wisdom or absurdity of a statement is defined with respect to
those desires and goals.
To a paperclip-maximizer, the statement "I should destroy these three
paperclips to save those ten thousand human babies" is absurd. To a
human, the same statement is wise. Neither one is _intrinsically_ wrong.
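The paperclip example can be put in Python directly (the utility
functions and their numbers are made up, purely for illustration): the
same action scores positively under one mind's evaluation and
negatively under another's, and neither score is the "intrinsic" one.

```python
def human_utility(paperclips_gained, babies_saved):
    # A cartoon human: lives vastly outweigh office supplies.
    return 1_000_000 * babies_saved + paperclips_gained

def maximizer_utility(paperclips_gained, babies_saved):
    # A paperclip-maximizer: only paperclips enter the evaluation.
    return paperclips_gained

# "I should destroy these three paperclips to save ten thousand babies."
action = {"paperclips_gained": -3, "babies_saved": 10_000}

print(human_utility(**action) > 0)      # True: wise, to the human
print(maximizer_utility(**action) > 0)  # False: absurd, to the maximizer
```

"Wise" and "absurd" here are just the signs of two different evaluation
functions applied to one statement; the statement itself carries neither.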
Did you read the article I linked to?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT