From: Michael D Miller (email@example.com)
Date: Wed Jan 07 2004 - 15:12:17 MST
Harvey Newstrom pointed out exactly how observer-biased this entire
discussion has been when he wrote:
> I think it is common for everybody to see the world in terms of what
> they know.
Please, take this very simple concept and apply it globally to my argument:
>>> You don't have to take the crap nature secretes for you any longer.
>>> It's gonna be fun, good and long lasting, a care-free positive
>>> satisfying Happy existence, for everyone, forever, no compromise
>>> needed. This is not paradise, it's the status quo. AFTER, we start
>>> talking about positive qualia and orgasmium and music with 459
>>> octaves 500 voices, paintings with 2 billion different colors, sex
>>> in 98 dimensions... (sorry the monkey here just started talking
>>> about red bananas... I know we'll have better stuff after we are
Suppose the case where happiness is hardwired to some constant value, let's
say orgasmic. Maybe I should experiment with tantric meditation before
making any claims, but I have a hunch that prolonged exposure to positive
qualia only desensitizes us to those qualia. Bathing with a golden
rubber ducky gets old, and god forbid you should ever have to go back to
using a regular rubber ducky. Whenever we fix our happiness at any single
point, we eliminate happiness completely.
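The desensitization hunch can be sketched as a toy model (an illustration of the argument, not a claim about neuroscience; the decay rate is an invented parameter): perceived happiness is the gap between the stimulus and a baseline that drifts toward recent stimulation, so a constant "orgasmic" stimulus feels like less and less over time.

```python
# Toy hedonic-adaptation model: perceived happiness is the difference
# between the current stimulus and an adapting baseline.

def adapt(stimulus_levels, rate=0.5):
    """Return perceived happiness over time as the baseline drifts
    toward the stimulus (rate is a made-up adaptation speed)."""
    baseline = 0.0
    perceived = []
    for s in stimulus_levels:
        perceived.append(s - baseline)
        baseline += rate * (s - baseline)  # baseline chases the stimulus
    return perceived

# A constant maximal stimulus: the felt effect halves at every step.
print(adapt([10.0] * 6))  # [10.0, 5.0, 2.5, 1.25, 0.625, 0.3125]
```

Under this sketch, pinning the stimulus at any constant level drives the perceived value toward zero, which is the "fixing happiness eliminates happiness" point above.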
Suppose the case where happiness is bounded. Whether happiness is bounded
between [neutral, happy] or [happy, orgasmic] doesn't really matter. Your
neutral point becomes your average happiness level. It's certainly
reasonable to assume that if you are bounded between [happy, orgasmic] and
you get stuck at just plain happy for a while, you may become rather
desperate to become orgasmic again. Likewise, if you are bounded between
[neutral, happy], you may notice that after being happy for a long, long
time with someone you're passionately in love with, having to go back to
neutral once the relationship is over hurts in the heart, quite a bit in
fact. If you suggest taking that pain away as well, I maintain that you're
just fixing our emotions at a point and removing them.
Suppose the case I haven't covered, where happiness is both increasing and
unbounded. Oh no, don't you dare unhook me from that machine. And, I think
we all agreed that being hooked up to the Orgasmotron indefinitely would be
a "Bad Thing"(tm) for "The Good of Humanity"(tm).
I'm pretty sure that we've decided that we don't want our FAI to experience
Pleasure/Pain. Therefore, if we are ever to hope for transhumanity, we must
eliminate Pleasure and Pain from our own race. In fact, I liken Pleasure and
Pain to producing and excreting biological waste: both are necessities for
the development and survival of biological organisms as complex as
ourselves. Experiencing Pleasure and Pain does not make us human; it
certainly is a factor in our behavior, but may I suggest that it is
collectively the biggest thorn in our side as a sentient race.
Ben Goertzel writes:
>>> It may be the nature of the *human* mind that, if it is openly and
>>> emotionally engaged with the world, it is going to experience a certain
>>> amount of negative emotions. I suspect this is the case. Zen masters
>>> who have banished negative emotions from their mind are not a
>>> counterexample, because they have in a sense stopped themselves from
>>> emotionally engaging in the world (an interesting choice, but not
>>> necessarily the "right" one).
Excellent point! When are we human, and when are we purely algorithmic
General Intelligence? How much do we wish to change ultimately when this
Singularity comes and we have the option to make ourselves exactly like our
FAI? Do we still want to paint and feel the need to sing? Do we still want
to cry? Do we still wish to have love and loss, rather than to never have
love at all?
Ben, I hope I've made it pretty clear that removal of physical aches and
pains, unfortunately, may under your own definition very well destroy our
humanity.
Ben Goertzel writes:
>>> But, if we engineered this property out of the human mind, would the
>>> result still be "human"? That's a subjective question, but my tendency
>>> is to answer "no".
Morality has nothing to do with what is Right/Good and Wrong/Bad and has
everything to do with what is Societally Appropriate/Accepted and
Inappropriate/Unaccepted. We as noble transhumanists or singularitarians
need to be thinking a lot bigger. There can be no society! If we want to
make people live forever, we cannot preserve the burning passion for life
that is humanity today. Can you say "Hello Planetmind," Earthzakharov?
Am I wrong to think that a Bayesian Reasoning FAI will conclude that
replacing our observer-centric Pleasure/Pain Reasoning with a more rational
and mathematically sound Bayesian Reasoning decision core is best for
humanity as well? Do we really believe that Pleasure/Pain Reasoning, our
biologically evolved method of decision making as humans, is actually the
most desirable for us?
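The contrast between the two decision styles can be made concrete with a toy example (all numbers invented for illustration): a Pleasure/Pain chooser picks whatever feels best right now, while a Bayesian-style chooser weights each outcome by its probability, as expected-utility decision theory prescribes.

```python
# Toy decision problem: each action has an immediate "pleasure" signal
# and a distribution of (probability, long-run payoff) outcomes.
# All values here are made up to illustrate the contrast.
actions = {
    "eat_cake": {"pleasure": 9, "outcomes": [(0.8, -2), (0.2, 1)]},
    "exercise": {"pleasure": 2, "outcomes": [(0.9, 5), (0.1, 0)]},
}

def pleasure_choice(acts):
    """Pick the action with the strongest immediate pleasure signal."""
    return max(acts, key=lambda a: acts[a]["pleasure"])

def expected_utility_choice(acts):
    """Pick the action with the highest probability-weighted payoff."""
    def eu(a):
        return sum(p * u for p, u in acts[a]["outcomes"])
    return max(acts, key=eu)

print(pleasure_choice(actions))          # eat_cake
print(expected_utility_choice(actions))  # exercise
```

With these numbers the pleasure signal and the expected utility disagree, which is the sense in which the two "decision cores" can pull in opposite directions.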
PS. META: First post! I really hope at least some of this was relevant...