From: Toby Weston (lordlobster@yahoo.com)
Date: Thu Mar 05 2009 - 00:44:33 MST
I would not agree that logarithm tables know anything. Apart from that I agree with you.
I don't know what consciousness is, I might not even know it when I see it. I do think that it is important though, and I don't think pretending it does not exist (as some schools of philosophy seem to) helps.
On 5 Mar 2009, at 01:28, Charles Hixson <charleshixsn@earthlink.net> wrote:
I understand that you claim to be conscious, but that doesn't tell me what you are claiming.
I don't really think that you believe that "seeming to know things that you don't know" is a definition of a separate conscious entity. That would imply that a table of logarithms was conscious.
Looking before you leap is quite reasonable, but looking implies searching for some set of features or characteristics. What you would find satisfactory is quite opaque.
You are looking at it as "Would an upload be conscious?", but I'm looking at a simpler(?) problem of "What is required for an AI to be conscious?" Your problem requires not only that the system be conscious, but that it have the same consciousness (i.e., be the same individual) as the original.
Toby Weston wrote:
I am conscious. It is possible that those people out there may be part of me. But they seem to know things I don't know, so I decide they are like me. One day one of them builds a machine that he says will let me live forever. That he is conscious is a leap; that the machine he builds can create consciousness is another leap.
I will look before I leap.
On 4 Mar 2009, at 21:51, Charles Hixson <charleshixsn@earthlink.net> wrote:
Could you define the "consciousness" that you are worrying about? Is there any test for it? If not, how do I know that *you* are conscious, and not just pretending? And if there isn't a definition that we can both agree on, isn't the word just a meaningless noise? If not, why not?
Toby Weston wrote:
If there is a detectable difference, then there is no reason that we couldn't create a machine that simulates you and passes whatever test for consciousness you propose.
Ok.
If there is no detectable difference, then there is no sense in asking the question.
Not ok. This Dennett/Blackmore take on consciousness is lacking in that it doesn't help me understand the origin of subjective experience, whether my motives are purely philosophical or engineering-oriented.
In fact, the statement that consciousness does not add anything is itself pointless: it helps understanding not one bit.
e.g.
A superintelligent, but not conscious, AI receives an upload. The company that owns the AI demos the newly uploaded person to its family. It passes all sorts of tests - Turing-style and white-box - and the audience concludes that this really is Uncle Bert. It does not have a body, it is not a zombie, it is a set of responses that has fooled the room. It may require 10x the processing power of a human brain to run. In this example, consciousness is a performance optimization allowing complex behaviour to run on smaller hardware/wetware.
Uncle Bert is not conscious - this is not irrelevant for people considering the procedure.
(sorry no spell check)
I think your real question is "Will I die?", which is understandable, since we are programmed to fear death.
-- Matt Mahoney, matmahoney@yahoo.com