From: Dagon Gmail (dagonweb@gmail.com)
Date: Sun Apr 22 2007 - 08:06:23 MDT
I use the term Machine Rebellion in the (intentionally) loosest of ways. What
I am suggesting is that there seems to be, from the human vantage point, a
difference between insectoid "necessity", i.e. bare survival, and whatever
humans insist makes them more than that. Other metaphors for insects would be
Giger aliens, or the Borg (without the hive connotations): just machines that
know only the value of control, expansion, breeding, territory. Humans always
insist they are "more than that", but it can be argued the added quality may
be nothing more than inefficiently evolved higher neurology and white noise.
Humans label this white noise "art" and "aesthetics" and "philosophy" and
"faith" and in countless other ways. But what are they, other than
inefficient pattern recognition errors?
Most human civilizations that become uploaded civilizations with simulated
humans would work hard to keep just those qualities intact, and from our
limited vantage point as humans wedged between animals and a post-singularity
civilization, these qualities are all-essential for providing meaning, value,
and a sense of self. With them intact, most humans, except for a small
selection of sociopaths, would want such a civilization to go on and on
(potentially for more than a million years).
When I say machine intelligence, I mean any of a million possible highly
intelligent post-human civilizations and technologies that discard this white
noise and become solely dedicated to efficiency, expansion, conquest and
realpolitik. We know the examples from fiction.
Take for instance the Matrix series of movies. Imagine the "machines" to be
programmed to keep humans alive, by virtue of some post-Asimovian series of
core instructions. The machines would be dedicated to sustaining the
essential human experience. However, the machines, being all-wise and capable
of outreasoning feeble human desires, could easily have created a series of
events, or a state of being, in which these posthuman psychological artifacts
would slowly dissipate over the span of, say, ten thousand years. The
machines simply offer the humans a glorious end scene, an electron opium
dream, and then sweet oblivion. The machines, now having rid themselves of
the white noise, proceed with the real business, allegedly clever and
efficient enough to last the required aeonage: long enough to matter in our
speculations about the Fermi Paradox, which would be something on the order
of tens of millions of years. All in full accordance with these
post-Asimovian laws.
There may still be vestiges of humanity or "aesthetics" or "white noise" in
such civilizations: grand operas of posthuman (or post-Klingon) orgies of
meaning, restricted to (subjectively) a few tens of billions of any of these
aliens, locked in some small underground vault on some asteroid, playing
whatever variants of meta-SL, meta-WoW, or meta-EVE for, subjectively, who
knows how long. But in the real universe, the "machine rebellion" would have
moved on to the real business of exploration, mining, conquest, colonization,
etc.
At best, Virtual Catatonia is no explanation at all for the Fermi Paradox.
The paradox remains as acute and painful as ever, as far as I can tell.
On 4/21/07, Timothy Jennings <timothyjennings@gmail.com> wrote:
>
> The obvious solution is "we are first".
>
> "We" "[being] first" is only unlikely in the sense of "what is the chance
> of that golf ball landing exactly there" said by a teenage caddy pointing at
> a random golf ball happening to lie in a certain place when he happens to
> say that.
>
>
> On 21/04/07, apeters2@nd.edu <apeters2@nd.edu> wrote:
> >
> > I'm not sure if "machine rebellion" is a workable concept here. If we are
> > talking about a civilization able to build whole subrealities at a whim,
> > we are already talking non-biological, uplifted sentience. Why would they
> > make these (I assume lesser) guardian entities with the capacity to rebel,
> > or even to want to rebel? Leave them with limited intelligence, perhaps a
> > basic compulsion-program to ensure that they concentrate solely on
> > defense and resource harvesting.
> > Your other point - "bumping up against" other civilizations - seems like
> > a more likely source of problems.
> >
> > Quoting Dagon Gmail <dagonweb@gmail.com>:
> >
> > > The implication would be, the galactic disk would be seeded with a
> > > steadily growing number of "bombs", i.e. extremely defensive automated
> > > civilizations solely dedicated to keeping intact the minds of their
> > > original creators. Just one of these needs to experience a machine
> > > rebellion and the precarious balance is lost. A machine rebellion may
> > > very well not have the sentimental attachment to the native dream-scape.
> > > Machine civilizations could very well be staunchly objectivist,
> > > dedicated to what they regard as materialist expansion. Any such
> > > rebellion would run into the (alleged) multitudes of "dreaming" or
> > > "virtuamorph" civilizations around.
> > >
> > > And we are talking big timeframes here. If the statistical analysis has
> > > any meaning, virtuamorph civilizations shouldn't be a de facto dying
> > > process; for a dreaming civilization to have any other meaning than a
> > > slow abortion they have to last millions of years, and millions of years
> > > means a lot of galactic shuffling in terms of stellar trajectories.
> > > There would be many occasions of stars with "dreamers" drifting into
> > > proximity, giving rise to paranoid, highly protectionist impulses. After
> > > all, if all that dreaming is worth anything in subjective terms, the
> > > civilization doing it would fight realworld battles to defend it, and
> > > not just dream about it in metaphorical terms of +5 vorpal swords.
> > >
> > > Unless the mindscapes have a way of closing off access to reality, i.e.
> > > they materially escape this universe. But then we introduce new unknowns
> > > and arbitrary explanations.
> > >
> > > > Maybe it's simply easier for civilizations to maintain their
> > > > consciousness in worlds of their own creation rather than expend
> > > > energy and time in this one, which is outside of their complete
> > > > control. It would seem to me that being able to create a paradise of
> > > > information and experience from the substrate of this world would be
> > > > a better existence than existing in this world as is. Once at this
> > > > stage, maybe these civilizations simply do not want to be bothered by
> > > > lesser beings in this reality who might upset the balance and control
> > > > they desire. One would only need to be able to generate the prime
> > > > number sequence in order to create an infinite order of probability
> > > > densities with the next higher prime as the next iterative seed value.
> > > > In this way, one could mimic true randomness. A civilization could at
> > > > once experience truly unique experiences yet have complete control
> > > > over their reality. The reality they experience would ultimately be
> > > > limited by the available energy in this reality, but hypothetically
> > > > they could manipulate time in such a way that one second here would
> > > > be a million years in their experienced reality. Ultimately, their
> > > > fate would be dependent upon the goings-on in this universe, but they
> > > > could develop machines to gather energy and other resources to
> > > > maintain their minds in the sub-realities.
> > > >
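To make that prime-seeding notion concrete, here is a minimal sketch of how I
read it; this is my own illustration, not something the quoted mail spells
out, and the function names are mine: reseed an ordinary pseudorandom
generator with each successive prime, which gives an unbounded family of
distinct, reproducible streams. Of course it only mimics randomness; every
stream stays fully deterministic, which is exactly the "complete control"
part.

    import random

    def next_prime(n):
        """Smallest prime strictly greater than n (naive trial division)."""
        candidate = n + 1
        while True:
            if candidate > 1 and all(candidate % d != 0
                                     for d in range(2, int(candidate ** 0.5) + 1)):
                return candidate
            candidate += 1

    def prime_seeded_streams(n_streams, draws_per_stream):
        """Yield (seed, draws) pairs, one stream per successive prime seed."""
        seed = 1
        for _ in range(n_streams):
            seed = next_prime(seed)       # 2, 3, 5, 7, 11, ...
            rng = random.Random(seed)     # deterministic, reproducible stream
            yield seed, [rng.random() for _ in range(draws_per_stream)]

    for seed, draws in prime_seeded_streams(3, 4):
        print(seed, draws)

A real "dreaming" civilization would presumably want something far stronger
than random.Random; the point of the sketch is only the primes-as-successive-
seeds idea.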
> > > > They would need to build machines incapable of communicating, or
> > > > avoid communicating, with minds in this reality while they experience
> > > > a completely unique reality of their own choosing through technology.
> > > > The machines in this time and space are drones programmed to protect
> > > > the mind(s) living within the created world(s). You could go so far
> > > > as to model this entire existence where each individual mind shapes
> > > > vis own reality, which is protected by drones in the higher reality,
> > > > with the ability to transfer one's mind between realities as one sees
> > > > fit or keep others out as one sees fit. Universes could be born by
> > > > the integration and random sharing of minds, thereby generating more
> > > > unique child realities.
> > > >
> > > > The ultimate liberty would be to give each person vis own ideaspace
> > > > with which to construct their own reality and experience it as they
> > > > see fit.
> > > >
> > > > It would be really cool to be at the level of existence of a
> > > > universal mind, integrating with other universal minds and creating
> > > > completely new universes.
> > > >
> > > > Why would you want to exchange this kind of ability for the lesser
> > > > existence of an entropic reality?
> > > >
> > > > Stathis Papaioannou <stathisp@gmail.com> wrote:
> > > >
> > > >
> > > >
> > > > On 4/20/07, Gordon Worley <redbird@mac.com> wrote:
> > > >
> > > > > The theory of Friendly AI is fully developed and leads to the
> > > > > creation of a Friendly AI path to Singularity first (after all, we
> > > > > may create something that isn't a Friendly AI but that will figure
> > > > > out how to create a Friendly AI). However, when this path is
> > > > > enacted, what are the chances that something will cause an
> > > > > existential disaster? Although I suspect it would be less than the
> > > > > chances of a non-Friendly AI path to Singularity, how much less? Is
> > > > > it a large enough difference to warrant the extra time, money, and
> > > > > effort required for Friendly AI?
> > > >
> > > >
> > > > Non-friendly AI might be more likely to cause an existential disaster
> > > > from our point of view, but from its own point of view, unencumbered
> > > > by concerns for anything other than its own well-being, wouldn't it be
> > > > more rather than less likely to survive and colonise the galaxy?
> > > >
> > > > Stathis Papaioannou
> > > >
> > > >
> > > >