From: Chris Hibbert (firstname.lastname@example.org)
Date: Sun Oct 21 2007 - 12:30:51 MDT
>> Evolution has little to do with what we "ought" to do.
> Yes it does. Ethics is a function of your upbringing. Cultures that teach
> respect for other humans in your cultural group are more successful than those
> that don't.
Hmmm. I was talking about genetic evolution. You seem to be talking
about memetic evolution. Even so, you're talking about what choices
lead to more progeny for a particular culture, rather than about ethics.
I agree that success is useful and important. But it isn't the
standard basis for ethics. I'm willing to concede that success is
usually consistent with ethics, but I don't think it's the right yardstick.
>> My theory has long been that self-awareness is the level that I would
>> demand from any creature before granting it an equal "right to live".
>> Others seem to rely on ability to feel pain, or something even less.
> How do you test for self awareness?
I don't have a scientific test. I'm willing to argue about individual
cases (though not ad infinitum) and follow general rules. Ceding rights
to infants and comatose patients is one of those general rules.
>>> Also, do you presume there is a test to distinguish a machine
>>> that can feel pain from one that only claims to feel pain?
>> If the claim seems real, and not recorded, I'm willing to count that.
> What makes a claim seem real? If you can trace the neural pathways in my
> brain when I say "ouch" and conclude that my response was computable, is the
> pain still real? If a program says "ouch" but the code is too complex for you
> to understand, is its pain real?
I understand that we are meat computers, so proving that parts of our
response are algorithmic doesn't undercut our sophistication. If a
program says "ouch", but doesn't produce any other grammatical
utterances, then the sophistication of the code doesn't matter to me.
If a program can pass the Turing test, but doesn't have opinions that
outlast the session, then it hasn't convinced me. (The Turing test is
interesting, but it's not sufficient.)
> If a program changes its behavior to avoid a negative reinforcement
> signal, as the program below does, does it experience pain? If not,
> then what test does this program fail?
I said earlier that my default standard is self-awareness, and that
others might believe awareness of pain would be sufficient. You'll have
to find someone who believes in the latter standard to argue about the
status of your program. To me, it's too simple to count.
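The program referred to in the parent message isn't preserved in this archive. Purely for illustration, a minimal sketch of the kind of learner being described — one that shifts its behavior to avoid a negative reinforcement signal — might look like the following. This assumes a simple epsilon-greedy action-value update; every name and parameter here is illustrative, not taken from the original program.

```python
import random

def train_avoider(steps=500, epsilon=0.1, alpha=0.2, seed=0):
    """Tiny action-value learner with two actions.

    Action 1 always yields a negative "pain" signal (-1.0);
    action 0 yields 0.0. Over time the agent's estimates push it
    away from the painful action.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]  # estimated value of each action
    for _ in range(steps):
        # epsilon-greedy: usually pick the best-looking action,
        # occasionally explore at random
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = 0 if q[0] >= q[1] else 1
        reward = -1.0 if a == 1 else 0.0   # the "pain" signal
        q[a] += alpha * (reward - q[a])    # incremental value update
    return q

q = train_avoider()
# q[1] (the painful action) ends up rated well below q[0]
```

Whether such behavior-adjustment amounts to experiencing pain is exactly the question under debate; the point of the sketch is only that the mechanism itself is a few lines of bookkeeping.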
--
I think that, for babies, every day is first love in Paris. Every
wobbly step is skydiving, every game of hide and seek is Einstein
in 1905. --Alison Gopnik (http://edge.org/q2005/q05_9.html#gopnik)

Chris Hibbert
email@example.com
Blog: http://pancrit.org
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT