From: Norman Wilson (nwilson@programmar.com)
Date: Mon Mar 10 2008 - 08:40:34 MDT
Lee Corbin writes
> Also, it's vital because we can *discuss* whatever is objective.
> You and I may discuss a distant nebula, and refer to it as a
> thing outside our skins, refer to it without any mention of our
> subjective impressions.
We can do this because we both learned the same symbols for mapping our
subjective experiences to the objective world. When you use the word "red",
I understand what you mean because it maps to my own subjective experience
of red. Via language acquisition, we've already implicitly agreed on these
mappings and don't have to renegotiate them each time we utter a statement.
We can also discuss subjective experience, as evidenced by this thread. In
the same way that I cannot directly convey my experience of "red" to you, I
cannot directly convey my subjective experience of "subjective experience".
I rely on the assumption that you experience something similar, and through
context and clarification you can infer what I'm talking about.
> "Is my red the same as your red?"
For the purposes of communication, I suppose it doesn't matter. When you
say "red", I know what you're talking about (that is, it consistently
invokes the same subjective experience), and that's what allows us to
communicate.
> It also vastly adds to the difficulties in discussing
> consciousness and trying to get a handle on it.
True, but we need a better reason than that to dispense with it.
> To think of "the big C" as an objective phenomenon characteristic
> of some objects/entities---and not characteristic (or only very
> marginally characteristic) of others---makes possible scientific
> progress towards understanding.
Absolutely.
> But discussions of the subjective nature of consciousness
> lead utterly nowhere.
From an armchair philosophy standpoint, that might be true. While this
issue may not be resolvable by our meat-brains, I propose that it's relevant
with regard to Friendly AI. In fact, it could even be a key element of
friendliness.
Certainly, it's possible (in principle) that our mental constructs could be
simulated to the extent that they are objectively indistinguishable from the
originals, while at the same time eliminating the most important part of
subjective experience (i.e., turning people into zombies). We may
disagree on the likelihood of this, but no one can conclusively rule out the
possibility. Similarly, one might argue, we can't rule out the possibility
of gremlins or invisible elephants in the middle of the room, but there's an
important difference between these arguments.
We have to find a reasonable balance between risks and consequences. If the
AI is wrong about the existence of gremlins or invisible elephants in the
middle of the room, well, so what... However, if it's wrong about
subjective experience and suddenly all of the lights in the universe go out,
replaced by highly detailed but dark simulations of lights, what a horrible
shame that would be. The risk is so unimaginably great that it's worth
serious consideration, even if we believe it to be unlikely.
While I wouldn't want to stubbornly refuse new technologies, such as
teleportation, radical brain-tissue replacement, or even "uploading", which
could improve or extend human life, I also don't want to vanish into
oblivion while some doppelganger runs around claiming to be me. For me,
this is a real concern and the jury is still out.
Norm