Re: De-Anthropomorphizing SL3 to SL4.

From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Wed Mar 17 2004 - 16:28:26 MST


Samantha Atkins <samantha@objectent.com> wrote:

On Mar 15, 2004, at 10:16 AM, Michael Anissimov wrote:
>
> "In 1993, a writer named Vernor Vinge gave a talk to NASA
> , in which
> he described the architecture of an event he called the “Singularity”,
> which is identical in every feature to McKenna’s Eschaton."
>
> Nothing in this makes me think that they're talking about the same thing. The
> Singularity is radically different than McKenna's Eschaton.

I regard the Singularity as the embodiment of McKenna's Eschaton, and McKenna's was not even the first hint of this phenomenon. Robert Anton Wilson pointed toward this future event more than twenty years ago in books such as Cosmic Trigger Vol. 1 (The Final Secret of the Illuminati). He called it the Jumping Jesus Phenomenon because a statistician named Anderla had noted that recorded knowledge had doubled many times since the time of Jesus, and that the intervals between doublings were shrinking fast enough to point to a spike only a few decades away. So when I read about Vinge a couple of years ago, I thought, "The techies have finally caught up with RAW."
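
The arithmetic behind the Jumping Jesus Phenomenon is simple enough to sketch. Here is a minimal Python illustration using the doubling dates commonly attributed to Anderla in RAW's retelling; the exact figures vary by source, so treat them as illustrative, not authoritative:

    # Dates by which "recorded knowledge" had doubled again, as popularly
    # attributed to Anderla via Robert Anton Wilson. Illustrative only.
    doubling_years = [1, 1500, 1750, 1900, 1950, 1960, 1967, 1973]

    for i in range(1, len(doubling_years)):
        interval = doubling_years[i] - doubling_years[i - 1]
        units = 2 ** i  # knowledge measured in "AD 1" units
        print(f"{doubling_years[i]}: {units:4d} units "
              f"(this doubling took {interval} years)")

The intervals come out to 1499, 250, 150, 50, 10, 7, and 6 years: each doubling takes a fraction of the time the last one did, and extrapolating that shrinkage forward is what puts the spike only a few decades away.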

> When you say "will this new emergent complexity be friendly in any
> sense we can fathom, or will we be consumed or destroyed by it?", I
> think the answer lies in the initial motivations and goal structure we
> instill within the first seed AI.

Which is why I think an enhanced human of known intelligence and high integrity would make a good seed AI. Perhaps Torvalds would consent to eat a large pile of RAM chips.

> "So we have three waves, biological, linguistic, and
> technological, which are rapidly moving to
> concrescence, and on their way, as they interact,
> produce such a tsunami of novelty as has never before
> been experienced in the history of this planet."
>

Reads like an outtake from Cosmic Trigger.

> Sometimes I believe we can plant the right seeds
> to make it likely. Sometimes I am not so sure.

Sir Martin Rees, Britain's Astronomer Royal, gives us only a 50% chance of survival, which seems about right to me. One of my back issues of Discover has an article, "20 Ways the World Could End Soon" (the human world, at least). Mean AI is only one of them; others include nukes, germs, collapse of the vacuum, and It Was All A Dream (!). Since, as the moderator himself has noted, we unenhanced humans are on a bad career path already, we are speaking of a dangerous yet necessary gamble.

> Seriously, the likelihood of a paper-clip AI taking
> over the universe is nearly non-existent.

It's interesting that encephalization has been on a steep upward curve since the Cambrian Period. Brains just keep getting bigger. That's some cause for comfort.

Tom Buckner

