Date: Tue Nov 21 2000 - 08:57:53 MST
I am not sure that Ben's shock level isn't more profound at SL2 than it is
at SL3. If a culture has already experienced SL2, then SL3 is merely an
extension. If the Galaxy is discovered to be mostly non-biotic and
non-intelligent, then what impact is star travel? Intelligent aliens created
out of petri dishes seem much more dramatic to me, because they become the
human species' Mind Children, to borrow from Moravec and Minsky. Opinions?
In a message dated Tue, 21 Nov 2000 10:07:25 AM Eastern Standard Time, "Ben
Goertzel" <email@example.com> writes:
What follows are some moderately disorganized thoughts -- take them or
delete them as you wish...
OK, so if we take Eliezer's definitions
SL0: The legendary average person is comfortable with modern technology -
not so much the frontiers of modern technology, but the technology used in
everyday life. Most people, TV anchors, journalists, politicians.
SL1: Virtual reality, living to be a hundred, "The Road Ahead", "To Renew
America", "Future Shock", the frontiers of modern technology as seen by
Wired magazine. Scientists, novelty-seekers, early-adopters, programmers,
SL2: Medical immortality, interplanetary exploration, major genetic
engineering, and new ("alien") cultures. The average SF fan.
SL3: Nanotechnology, human-equivalent AI, minor intelligence enhancement,
uploading, total body revision, intergalactic exploration. Extropians and
transhumanists.
SL4: The Singularity, Jupiter Brains, Powers, complete mental revision,
ultraintelligence, posthumanity, Alpha-Point computing, Apotheosis, the
total evaporation of "life as we know it." Singularitarians and not much
else.
then, it occurs to me that the proper Zen-Buddhistic answer to the question
"What is SL5?" is:
~Everyday, pretechnological, embodied life~
To be slightly less enigmatic, it seems to me that "shock level" has to do
not only with one's technological exposure, but also (and perhaps
more so) with one's fundamental existential outlook.
To the person who has really come to grips with the [elusive,
unreal/real/semireal/surreal] nature of everyday life, none of these things
are shocking ...
Shockingness comes about when a mind has given a falsely solid reality to
something that really isn't all that solid or definite at all
... and then finds out this falsely solid reality is indeed falsely solid...
What's wonderful about SL4 is that at this stage,
science and technology are finally subverting themselves -- sci. and tech.
are the ultimate manifestations
of the Western mindset that focuses on concrete, solid, over-reified
external reality, and at the SL4 technology stage, they'll be truly
subverting the notion of external reality.... (In case you're curious, I'm
currently visualizing the previous sentence being uttered
by the talking asshole in William Burroughs' Naked Lunch ;)
In terms of the ethical issues I raised earlier on this list: SL1-3 do
indeed correspond more easily with an elitist view in which only the top x%
of the population get to partake in technological improvements. SL4 posits a
level of being at which the notion of "population" and
"individual" are no longer necessarily meaningful, moving into a phase where
current ethical concerns don't really have grounding --
ethics as we know it is based on a notion of the individual and society
which is ephemeral on the grand scale... the principle of
compassion is timeless, but its manifestations will vary with the epoch...
It's never really the technology that's shocking. Shock is always the same
thing... the shattering of provisional assumptions, which minds
need to make in order to cope with the lack of enough data to make definite
assertions. Making provisional assumptions and realizing all
the while that they're provisional is a big trick ... hard for us to master
... will it be possible for other organisms, later on, to master
this trick consistently? If so, then Eliezer is right and transhuman AIs
really will avoid insanity nearly all the time.
I'm almost converted! Not to libertarianism, mind you ... but back to the
more digital-utopian perspective on AI that I had a few years ago.
But wait! Not quite... Hold on. It's always going to be MORE efficient
to hold a provisional assumption and forget that it's
provisional, on some level... Given fixed resources, intelligence and
mental-health/enlightenment/inability-to-be-shocked will always
contradict each other. The question is whether, if the fixed resources are
LARGE ENOUGH, this inevitable tradeoff will become
less of a significant factor than it is in the human mind...
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT