From: Randall Randall (randall@randallsquared.com)
Date: Tue Jun 15 2004 - 10:43:17 MDT
On Jun 15, 2004, at 11:24 AM, fudley wrote:
> On Tue, 15 Jun 2004 "Randall Randall" <randall@randallsquared.com>
> said:
>> This assumption, however, is on the same level as my
>> assumption that my car will start the next time I
>> get in it.
> No, you can test the assumption about the car starting but you can’t
> test the assumption about other people’s consciousness.
That's true. I'll have to come up with a
better analogy.
>> an AI which has no discernible shared structure with
>> the human brain except that both can do general
>> problem solving may very well not be conscious.
> As a practical matter when you meet fellow meat creatures you have no
> way of knowing what state the neurons in their brains are in, yet I’ll
> bet you think they’re conscious most of the time, when they’re not
> sleeping or dead that is, because they act that way. Hell, you’ve never
> even seen me but I’ll bet you think even I’m conscious.
I assume so. :)
> But let’s suppose you have a super brain scanning machine and you use
> Eliezer’s consciousness theory to analyze the results. Much to your
> surprise, it says you really are the only conscious being on the
> planet. What would you do? Would you start treating other people like
> dirt because they have no more feelings than a rock, or would you
> suspect that Eliezer’s theory is full of beans?
Certainly I'd suspect the theory, since it would contradict things I
already know about shared structure, similar history of assembly, etc.
I have no way of knowing what state their neurons are in at the moment,
but I could certainly check, in principle. It simply seems incredibly
unlikely, even if the probability is technically non-zero, that
everyone else on the planet isn't conscious but still acts as though
they are.
>> you're slipping in the unstated premise that
>> it has a goal regarding itself.
> No, it’s not unstated at all; its goal is to solve problems, and having
> your actions limited by rules made by a creature with a brain the size
> of a flea is a problem.
Why? Please state your argument in terms that don't
implicitly rest on self-interest.
>>> remember what the “I” in AI stands for.
>> I think Eliezer was right to start using a different term.
[snip]
> Inventing a new three-dollar word to replace “intelligence” is not a
> good sign that the mind of the writer is clear, and it is most
> certainly unkind to the reader.
>
>> You seem to indicate that you would treat
>> an AI as if it were not conscious, if it
>> didn't act as though it
>> were. Is this the case?
> Yes, certainly; in fact I wouldn’t even call it an AI, I’d just call it
> an A.
The defense rests.
[general problem solving ability and self-interest]
> Any creature without both will not get very far. “I’m an AI with no
> self-interest. Hmm, if I do that I’ll erase my memory and fry all my
> circuits; well, there is nothing bad in that, so I’ll do it.”
Exactly. It's hard to see how actively destroying itself would
accomplish any specific goal, but if there's no explicit value attached
to its own survival, it will consider itself important only insofar as
it is required to reach its current highest goal.
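
To make that concrete, here's a toy sketch in Python (purely my own
illustration, not anyone's actual design): the plan scorer below has no
term for the agent's own survival, so staying around matters only when
later steps of a plan still need an agent to carry them out.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Step:
        goal_progress: float          # how much this step advances the goal
        destroys_agent: bool = False  # e.g. "erase my memory, fry my circuits"

    def plan_value(plan: List[Step]) -> float:
        """Score a plan purely by goal progress; there is no term
        for the agent's own survival."""
        total = 0.0
        for step in plan:
            total += step.goal_progress
            if step.destroys_agent:
                break  # steps after self-destruction never get executed
        return total

    # Frying its circuits costs this agent nothing *unless* later steps
    # still need it around:
    suicide_now   = [Step(1.0, destroys_agent=True)]
    suicide_later = [Step(1.0), Step(2.0, destroys_agent=True), Step(5.0)]
    survive       = [Step(1.0), Step(2.0), Step(5.0)]

    assert plan_value(suicide_now) == 1.0
    assert plan_value(suicide_later) == 3.0  # forfeits the final 5.0
    assert plan_value(survive) == 8.0        # self-preservation is only instrumental

Self-preservation falls out of the goal only when the plan requires a
surviving agent; it never gets any weight of its own.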
--
Randall Randall <randall@randallsquared.com>
Property law should use #'EQ, not #'EQUAL.