From: Samantha Atkins (samantha@objectent.com)
Date: Mon Mar 15 2004 - 22:49:46 MST
On Mar 15, 2004, at 10:16 AM, Michael Anissimov wrote:
>
> "In 1993, a writer named Vernor Vinge gave a talk to NASA
> <http://singularity.manilasites.com/stories/storyReader$35>, in which
> he described the architecture of an event he called the “Singularity”,
> which is identical in every feature to McKenna’s Eschaton."
>
> it makes me think that they're talking about the same thing. The
> Singularity is radically different from McKenna's Eschaton. When
> people like Pesce say things like "the bios might not be prepared for
> the emergence of the logos", I tend to visualize them visualizing some
> sort of societal/psychic chaos rather than outright destruction at the
> hands of a paperclip SI. I might be wrong about this, but it's the
> impression I've gotten from reading some of McKenna's articles. Humans
> are fragile creatures; we require a very precise environment to
> survive, and in the absence of an SI that specifically cares about
> preserving it, we would probably be swept aside (read: extinct) by the
> first SI(s) with goals that meant the rearrangement of matter on any
> appreciable scale. "Care for humans" is not a quality we should expect
> to emerge in an arbitrary SI of a certain level of intelligence.
There will be plenty of societal/psychic chaos even in a Friendly SI
scenario. The logos is the Word, the primary Intelligence. The bios,
humanity as evolved, will not be prepared. This is deeper than
societal/psychic chaos. It is a fundamental shift to a state we are
incapable of imagining.
>
> When you say "will this new emergent complexity be friendly in any
> sense we can fathom, or will we be consumed or destroyed by it?", I
> think the answer lies in the initial motivations and goal structure we
> instill within the first seed AI. As Nick Bostrom writes in
> http://www.nickbostrom.com/ethics/ai.html,
>
> "The option to defer many decisions to the superintelligence does not
> mean that we can afford to be complacent in how we construct the
> superintelligence. On the contrary, the setting up of initial
> conditions, and in particular the selection of a top-level goal for
> the superintelligence, is of the utmost importance. Our entire future
> may hinge on how we solve these problems."
>
> The *first* seed AI seems to be especially important because it would
> likely have cognitive hardware advantages that allow it to bootstrap
> to superintelligence before anyone or anything else. This means that
> the entire human race will be at the mercy of whatever goal system or
> philosophy this first seed AI has after many iterations of recursive
> self-improvement. The information pattern that determines the fate of
> humanity after the Singularity will not be within us as individuals, or
> predetermined by meta-evolution, or encoded into the Timewave; it will
> be in the source code of the first recursive self-improver. If some
> idiot walks into the AI lab just as hard takeoff is about to commence,
> and spills coffee on the AI's mainframe, driving it a bit nutty, then
> the whole of humanity might be destroyed by that tiny mistake. Also,
> novel events prior to the Singularity are liable to have negligible
> impact upon it. If someone has a really great trip where they
> visualize all sorts of wonderful worlds, shapes, and entities, it will
> have absolutely no impact on whether humanity survives the
> Singularity. I have a feeling that Pesce and others would be turned
> off by this interpretation of the Singularity because it is so
> impersonal and arbitrary-seeming.
>
I am still not convinced that the "source code" of the first recursive
self-improver is in any meaningful sense inviolate and thus capable of
protecting us. In the end the SI will either consider humans
interesting sentients that its chosen morality leads it to protect and
preserve, or it will not. I find it odd that we, who find it so
difficult to devise a morality for ourselves or to write more than the
bare seeds of an SI, would believe we can determine its top-level
goal, and thus the root of SI morality, indefinitely. It would be
more realistic to decide whether a vastly less bounded intelligence
is such a good thing, in our humble opinion, that we are willing to risk
everything on its creation even if it means our doom. That is not a
pleasant prospect, but I believe it is the more realistic one.
Humans can barely predict whether a simple word processor will function
as its requirements say it should. How amusing that we think we can
predictably bound something that will think millions of times faster
than ourselves.
> So when Pesce says stuff like,
>
> "So we have three waves, biological, linguistic, and
> technological, which are rapidly moving to
> concrescence, and on their way, as they interact,
> produce such a tsunami of novelty as has never before
> been experienced in the history of this planet."
>
> or
>
> "Anything you see, anywhere, animate, or inanimate, will have within it
> the capacity to be entirely transformed by a rearrangement of its atoms
> into another form, a form which obeys the dictates of linguistic
> intent."
>
> it makes me feel like he has a false sense of hope, that the
> Singularity is more about embarking on a successful diplomatic
> relationship with the self-transforming machine elves, rather than
> solving a highly technical issue involving the design of AI goal
> systems. I doubt that Pesce realizes the forces responsible for the
> rise of complexity and novelty in human society correspond to an
> immensely sophisticated set of cognitive tools unique to Homo sapiens,
> not to any underlying feature of the universe. Fail to pass these
> tools on to the next stage, and the next stage will fail to carry on
> the tradition of increasing novelty.
>
Since the SAI will recursively self-improve all aspects of itself, I
would find it remarkable if it never bothered to critically examine its
own goal systems, especially considering the fallible beings who laid
down those goals. Whether after examination it ends up with an
effective goal system that preserves humans is really anyone's
guess. My guess is that any sufficiently intelligent being will
sooner or later come to greatly value benevolent co-existence with
other reason-capable beings, even given great differences in ability.
But I do not know whether it will come to that state soon enough for
humanity to survive. Sometimes I believe we can plant the right seeds
to make it likely. Sometimes I am not so sure.
> The vast majority of biological complexity on this planet will be
> irrelevant to the initial Singularity event, because it will play no
> part in building the first seed AI, except insofar as it indirectly
> gave rise to humanity. Linguistic; irrelevant except insofar as the
> first AGI designers use language to plan their design and launch.
Linguistic in the sense used by McKenna refers to a lot more than just
actual language.
> Technological; also, only a small portion of the technological
> complexity on our planet today will be used to create transhuman
> intelligence. The *simplest constructable AIs* are likely to have
> correspondingly simple goal systems; so the *easiest* AIs to launch
> into recursive self-improvement are also likely to be the ones
> bringing on the most boring arrangements of matter, such as multitudes
> of paper clips. Simple, boring, cruel, easy.
By the same logic the earth should have nothing but one-celled
creatures on it! Seriously, the likelihood of a paper-clip AI taking
over the universe is nearly non-existent.
> Given a *benevolent* Singularity, yes, biological, linguistic and
> technological forces might indeed intertwine with one another and
> produce a "tsunami of novelty" in much the way that he describes, but
> it seems to me that he's regarding this tsunami of novelty as
> basically coming for free. "Novelty", in the sense that Terence
> McKenna uses it, has an unambiguously positive connotation.
>
Hardly for free, as he seems to call for the transformation of ourselves
in order to give birth to this possibility.