From: Randall Randall (randall@randallsquared.com)
Date: Sun Jun 13 2004 - 17:42:45 MDT
On Jun 5, 2004, at 8:27 AM, John K Clark wrote:
> On Fri, 4 Jun 2004 "Randall Randall" <randall@randallsquared.com> said:
>> Eliezer intends to build a very intelligent goal-oriented system
>> which is explicitly *not* a person.
>
> That’s Eliezer’s intention, but he will never have a way to know if
> he was successful. At one time it was thought that human beings that
> had a lot of pigment in their skin were not really people, so it was
> OK to enslave them. I believe assuming an intelligence, in fact an
> intelligence vastly superior to our own, is not a person because it
> is made of metal, not meat, is equally mistaken and will lead to
> similar tragic consequences.
Not only is no one arguing that the material of composition
is essential to consciousness, but I find it hard to imagine
that anyone on this list would believe that it is so. That
argument is a straw man; there is no need to refute it again.
> Now no doubt Eliezer will have a theory that makes him think that,
> however intelligent the machine is, it is not really conscious, but
> is his theory right? There is no way to know. Consciousness theories
> are a dime a dozen and there is no way to test any of them.
Yet. The hallmark of a good hypothesis will be that
it *is* testable. I don't have one, but I'm not willing
to insist that it's logically impossible that one exists.
>> Someone arguing against this idea, pre-calculator, might
>> very well point out that if a machine could
>> answer math questions, like a person, then it
>> must have mental states that correspond to the
>> intermediate steps of the calculation, and
>> having mental states presupposes personality.
>
> Yes, some might have argued that point, but as for me my rule of
> thumb is that if something acts like a person it probably is. I
> can’t prove it, of course, but it seems to work pretty well when I
> use it on my fellow meat creatures and I see no reason to modify it
> if the creature is made of something else.
Sure, and unless and until there is a working theory
of consciousness, that's a good rule. No argument there.
>> Before you once again argue that sentience
>> must either be unimportant or an automatic
>> feature of intelligence, you might want to
>> consider the possibility that general problem
>> solving, like (some forms of) math, can be
>> usefully separated from personality.
>
> If I was convinced that was true I would have to become a creationist,
> I’d have no alternative because I would have absolutely no way to
> explain
> how random mutation and natural selection produced at least one
> creature
> that had personality and consciousness, me.
I have no idea why that is your position.
How is this different from a statement that you
"would have absolutely no way to explain how random
mutation and natural selection produced at least one
creature that can metabolize alcohol"?
Without understanding the process spaces of
consciousness and intelligence, how can you
know how likely it is that a random intelligence
would be conscious? I see no reason to assert
that problem-solving ability must always be
associated with consciousness.
--
Randall Randall <randall@randallsquared.com>
Property law should use #'EQ, not #'EQUAL.