Theoretical question for the list: publicity?

From: Brian Atkins (brian@posthuman.com)
Date: Sat Apr 07 2001 - 11:33:12 MDT


To the list I ask: if, for instance, Wired magazine wanted to do a large
article about SIAI in the near future, complete with a cover image of
Eliezer and the quote "This 21-year-old cognitive scientist is building an
AI that will end the world as we know it", do you think that would
accelerate our plans or hurt them? Assume that besides talking about
Eliezer and our plans, it also presents FAI as the answer to Bill Joy's
AI concerns.

That probably is not how we would want to come across in an important
article, but it might be how it would be presented by a magazine like Wired.

On the plus side, more people would find out about us, and we would likely
be able to get more funding. On the downside, it might start a public
backlash of unknown proportions. Or it might not; most people might react
as they did to Bill Joy and either say it won't happen, or say that there
is nothing they can do about AI, so at least we are the best ones to do it.

"Eliezer S. Yudkowsky" wrote:
>
> James Higgins wrote:
> >
> > Well, actually, if the anti-Singularity memes start doing the talk show
> > circuit, it could potentially bring a halt to the project. If the general
> > population were sufficiently enraged by the thought that this small,
> > elitist group wanted to destroy their future, I could easily see the mob
> > mentality taking over. And it would be rather difficult to concentrate on
> > programming a seed AI while you're being protested, attacked and, if
> > they're upset enough, firebombed.
>
> I have to agree with this. It wouldn't halt the advance of time, but it
> would effectively put an end to the possibility of the Singularity
> occurring "in a calm and orderly fashion". If the Singularity still
> occurred due to Friendly AI, it might be an underground network of
> programmers working through Freenet (civil disobedience), some incredibly
> sensible military researcher who pushed his project into the use of
> "Friendly AI" semantics (government exception), and so on. If not, things
> degenerate into more chaotic and catastrophic scenarios: nanotech followed
> by someone getting a few hours of time on a nanocomputer, the Internet
> reaching the point of supersaturation where spontaneous emergence is
> possible, nanotechnological warfare, and so on.
>

-- 
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

