From: Lee Corbin (lcorbin@rawbw.com)
Date: Thu Mar 13 2008 - 07:47:54 MDT
Norman wrote:
> From: "Norman Wilson" <nwilson@programmar.com>
> Sent: Monday, March 10, 2008 7:40 AM
>
> Lee Corbin writes
>
>> Also, it's vital because we can *discuss* whatever is objective.
>> You and I may discuss a distant nebula, and refer to it as a
>> thing outside our skins, refer to it without any mention of our
>> subjective impressions.
>
> We can do this because we both learned the same symbols for mapping our
> subjective experiences to the objective world. When you use the word "red",
> I understand what you mean because it maps to my own subjective experience
> of red. Via language acquisition, we've already implicitly agreed on these
> mappings and don't have to renegotiate them each time we utter a statement.
Yes. The process of adopting conventions and aligning them to objective
characteristics isn't so easy to describe, but I sense that we completely agree
here.
> We can also discuss subjective experience, as evidenced by this thread. In
> the same way that I cannot directly convey my experience of "red" to you, I
> cannot directly convey my subjective experience of "subjective experience".
> I rely on the assumption that you experience something similar, and through
> context and clarification you can infer what I'm talking about.
Beyond the simple claim that it exists (which is true), what intellectual good
has ever come from discussing subjective experiences? Has it ever added
anything to our knowledge? Well, sure: some brain investigators have
been inspired by their own subjective experiences, but in Pan Critical
Rationalism useful inspiration may come from anywhere, even your local
witch doctor.
>> "Is my red the same as your red?"
>
> For the purpose of communication, I suppose it doesn't matter. When you
> say "red", I know what you're talking about (that is, it consistently
> invokes the same subjective experience), and that's what allows us to
> communicate.
>
>> [Talk of subjective experience] also vastly adds to the difficulties
>> in discussing consciousness and trying to get a handle on it.
>
> True, but we need a better reason than that to dispense with it.
Addressed above.
>> To think of "the big C" as an objective phenomenon characteristic
>> of some objects/entities---and not characteristic (or only very
>> marginally characteristic) of others---makes possible scientific
>> progress towards understanding.
>
> Absolutely.
Good.
>> But discussions of the subjective nature of consciousness
>> lead utterly nowhere.
>
> From an armchair philosophy standpoint, that might be true. While this
> issue may not be resolvable by our meat-brains, I propose that it's relevant
> with regard to Friendly AI. In fact, it could even be a key element of
> friendliness.
Certainly AIs wishing to emulate humans must take this
into account. But that can be done, I submit, by following an
entirely functional approach. Making something that acts like a
human, quacks like a human, etc., will be good enough, because
except for grotesquely elaborate thought experiments involving
computronium and giant look-up tables (GLUTs), functionalism
works just fine as a criterion.
> Certainly, it's possible (in principle) that our mental constructs could be
> simulated to the extent they are objectively indistinguishable from the
> originals, while at the same time eliminating the most important part of
> subjective experience. (I.e., turning people into zombies). We may
> disagree on the likelihood of this, but no one can conclusively rule out the
> possibility. Similarly, one might argue, we can't rule out the possibility
> of gremlins or invisible elephants in the middle of the room, but there's an
> important difference in these arguments.
>
> We have to find a reasonable balance between risks and consequences.
> If the AI is wrong about the existence of gremlins or invisible elephants
> in the middle of the room, well, so what... However, if it's wrong about
> subjective experience and suddenly all of the lights in the universe go out,
> replaced by highly detailed but dark simulations of lights, what a horrible
> shame that would be. The risk is so unimaginably great that it's worth
> serious consideration, even if we believe it to be unlikely.
Sure. But I'll bet that a seriously >H entity won't have any trouble
subscribing to practical functionalism.
> While I wouldn't want to stubbornly refuse new technologies, such as
> teleportation, radical brain-tissue replacement, or even "uploading", which
> could improve or extend human life, I also don't want to vanish into
> oblivion while some doppelganger runs around claiming to be me. For me,
> this is a real concern and the jury is still out.
People have been coming up with great scenarios to help us double-check
that a functionally successful upload (or robots subsequently downloaded
from its code) will leave us no excuse for thinking that it isn't "conscious"
or has no "subjective experience", any more than it is conceivable that,
say, black people aren't conscious.
Lee