From: Michael Anissimov (firstname.lastname@example.org)
Date: Thu May 02 2002 - 14:06:02 MDT
When we start making guesses about entities with perceptual schemata
more complex than our own, we run into inherent cognitive limits.
See "Staring Into the Singularity". If you've played the Game of Life
or one of its close cousins, you can see how even a minor change in
the system's underlying rules can yield radically different emergent
behavior. So conversations regarding the ethics of
Transition Guides should not be confused with plotting out the course
of the Spike itself as we near the creation of AI. Outsiders will
never take us seriously if they think we are lumping our post-
Singularity predictions together with our pre-Singularity plans. I
think this is another fundamental difference between an 'Extropian'
and a 'Singularitarian': we don't talk about the future so much as
we *make* the future right now.
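The Game of Life point can be made concrete with a toy experiment. The sketch below (my own illustration, not anything from the essay; the grid size, seed, and fill density are arbitrary choices) runs the same random soup under Conway's standard rule, B3/S23, and under a one-rule variant, B36/S23 ("HighLife", which adds birth on six neighbors). A single extra birth condition is enough to send the two runs down entirely different trajectories within a few dozen steps.

```python
# Minimal sketch: same initial random soup, two nearly identical rule sets,
# radically divergent outcomes. Grids are sets of live (x, y) cells on a torus.
import random

def step(grid, size, birth, survive):
    """One synchronous update of a size x size toroidal grid."""
    new = set()
    for x in range(size):
        for y in range(size):
            # Count live neighbors, wrapping around the edges.
            n = sum(((x + dx) % size, (y + dy) % size) in grid
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
            if (x, y) in grid:
                if n in survive:
                    new.add((x, y))
            elif n in birth:
                new.add((x, y))
    return new

def run(seed, steps, birth, survive, size=24, fill=0.35):
    """Evolve a seeded random soup for a fixed number of steps."""
    rng = random.Random(seed)
    grid = {(x, y) for x in range(size) for y in range(size)
            if rng.random() < fill}
    for _ in range(steps):
        grid = step(grid, size, birth, survive)
    return grid

life = run(1, 30, birth={3}, survive={2, 3})        # Conway's Life: B3/S23
highlife = run(1, 30, birth={3, 6}, survive={2, 3})  # HighLife: B36/S23
diverged = len(life ^ highlife)  # cells where the two runs disagree
print(len(life), len(highlife), diverged)
```

Both runs start from the *identical* grid (same seed); only the birth rule differs, yet the symmetric difference between the final states is large — the emergent behavior, not just the details, has changed.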
My guess, which I readily admit is based only on personal ethics and
foundationless speculation, is that everything will immediately
change entirely, yet in a way we are comfortable with. A Power would
be able to subtly play with the 'underlying rules of the game' in
ways we don't even notice - a skilled act of applied systems theory.
I don't think volition will be 'respected' in the strict reified
sense we use to describe it now, but I don't think we'll mind, or
maybe we won't even notice. I personally think that a benevolent,
'omnipotent' normative altruistic sentience (note I don't say
'superintelligent', because there's no way I can project that - I
can only guess at an idealized altruistic sentience) will seamlessly
upload the entire
Earth, and let individual sentients transcend to higher levels of
existence if their own 'volition', if such a thing exists, permits.
But I don't think it will be a question of volition alone. A SI could
*anticipate* a human being's deepest desires and fulfill them before
the human being ever thinks of them verself. Some people might want
this. I don't think you'll necessarily have to say "I want such and
such" out loud for a Sysop to know that you want something.
Fulfillment-of-desires type scenarios can lead to odd circumstances,
from the human perspective:
~Sysop subprogram manifests in Michael's room as a NeoAtlantean Angel~
Angel: How may I help you?
Michael: *drops the vase he was holding* FINALLY!
Michael: *CRASH* I want A, B, and C.
Angel: *gives Michael A, B, and C.* Are you satisfied? Scanning your
cognitive matrix now, I can see that desires for Aa, Ba, and Ca are
manifesting. Extruxtrapolating* to what you will do after you get
those, I can see that you will want Ab, Bb, and Cb. Each subsequent
desire will subsume the others, so in a way you are wasting your time
with the earlier ones. But if your intelligence were fundamentally
enhanced, none of these other things would concern you, you would be
interested in D, E, and F.
Michael: Enhance my intelligence so it goes through the roof, then!
Angel: If I enhanced your intelligence using all my available
computing power, your identity would fragment and you would, in
effect, experience a death. This is not compatible with your current
desires.
Michael1: Oh dear...just set me up with a sharp upward curve in
enhancement, supplemented by a complex hierarchical scheme of novel
problems and corresponding modified and enhanced emotional drives, in--
Michael2: Screw my current desires! Full speed ahead!
I want to be an ArchPower!
*The Sysop anticipated that I would appreciate ver making up a new
word! Clever folk, those Sysop subprograms.
What's a Sysop Angel to do in a scenario like this? Who is right, what
is going on? Is Michael2 caving in to his hidden desire to become
tribal chief? Does the Sysop have a responsibility to stop this? Is
Michael1 being a wuss? Is the only way a Sysop can fulfill people's
desires by continuously anticipating what they want to know or do in
any given situation? Will any of these concepts make sense
after the Singularity? If we are destroyed by nukes, we will never
know. If we are good Singularitarians, and think about AI and spread
the correct memes, we may one day figure it out.
I harbor a degree of panpsychist sentiment myself; and since a Power
would so rapidly transcend everything that had ever previously
existed, I think the differences between rocks and humans will seem
much smaller. If there is a "magical threshold"
called "sentience", then a Power will cross another billion "magical
thresholds" in the first week of the Singularity. I believe a Power
would implement a "scalable morality model" - and everyone will agree
it's fair. The presence of this "double standard" will be
imperceptible to any one being - the moral scale will be so gradual
that no individual being will be able to communicate with another being
and feel like they are being "deprived of morality". What I'm speaking
of isn't "morality" in the human sense either - it's a sort of
cognitive architecture-transcending "morality" that we would probably
understand more readily as "delegation of resources". If you give a
Power an ExaExabyte of memory, ve will be able to make use of it. If
you give that to a static upload, it's pointless. So I think "Zones"
will be created in which more sophisticated beings live in more
compressed, complex, fluxacious (ah, yes, I made up a word. I don't
care.), intense environments, and more mild beings can live in "Fringe
Zones". A Power can fragment into a legion of mild beings, each one
going exploring in a novel, 'lesser-density' area of Fun Space, then
perhaps remerging at a later date. Who really knows? This is all
worthless, unimaginative, pathetic homo sapiens sapiens gibberish. All
I can do is hope the future shock of our idle speculations has a
positive memetic effect.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT