From: Brian Atkins (brian@posthuman.com)
Date: Wed Apr 25 2001 - 15:29:37 MDT
"Christian L." wrote:
>
> Brian Atkins wrote:
> >Damien, let's face it- Ray has tons more "media power" comparatively.
> >Maybe it's all those prizes he keeps winning, or something else, but when
> >he publishes that book I guarantee you there will be some sort of media
> >frenzy.
>
> Are you sure about this? "Spiritual Machines" was (and still is) pretty
> radical to the general public (high end SL3), and I didn't see any coverage
> from the mainstream media about it. There was one small editorial in the
> "culture" section of "Svenska Dagbladet", one of Sweden's largest papers,
> but that was it. I didn't see anything on CNN or BBC (maybe I missed it) so
> there was hardly any "frenzy" back then at least. The Bill Joy debacle went
> by pretty quietly too as I recall (I think the focus was mainly on Biotech
> stuff). I think that if there is going to be a media frenzy about Ray's
> forthcoming book, there should have been more press about SM as well (or
> does he have more "media power" now than then?).
Well, perhaps because I am more interested in these subjects, it seemed to
me like it got a lot of media attention. CNN did do at least some coverage, see:
http://www.cnn.com/2000/TECH/computing/07/06/future.views.idg/index.html
Anyway, it got a lot more attention than The Spike, and I doubt anyone will
dispute that. And yes, I think Ray's "star power" has only gone up further
since he won that Lemelson prize this week.
>
> There is still debate (in academia and elsewhere) about whether or not
> *human-level* AI is even possible, so I think that most people would
> consider SIs and Powers too fantastical to take seriously. The media may not
> want to stick their necks out with such a fantastic claim, especially since
> many respectable AI-researchers think that the whole idea is ludicrous.
> (Some 30 years ago, the above-mentioned newspaper proclaimed on its front
> page that perpetuum mobile machines were going to solve the energy crisis...
> quite some laughter in academia followed...)
Well the problem as Ray notes is that most people seem to have a lot of
trouble fully internalizing what the future holds. If you had predicted
some of the things we have today to the average scientist 30 years ago,
they would likely have had the same immediate knee-jerk negative reaction.
Many scientists unfortunately operate at a very low shock level.
And even if they do start to believe it, and take it seriously, they
(whether scientist or journalist) may feel unwilling to discuss it in
public for fear of taking heat from colleagues. Unfortunately, even so-
called "cutting edge" organizations like Wired seem to suffer more and
more from this conservatism. You can see it in that article on Wired News
today about Ray: the references to Frankenstein and other bits of it show
the writer's inability or unwillingness to take the subject completely
seriously.
>
> I saw a thing about Ray's ideas on a german science-program ("Nano" on
> 3sat). It discussed uploading and AI, and had interviews with Ray. In the
> end, there was some professor of computer science as a studio guest. He
> simply said that AI was impossible because human brains are analog and
> computers are digital. Period. People like these always seem to get the last
> word, in the printed press as well as on television. This could be why there
> haven't been any debates about these things; when people watch it on TV, they're
> basically told that it is nothing to worry about.
Yeah, it's rather annoying: the writer/producer of the show/article always
wants to get other opinions. Unfortunately, at the current time there are
very, very few people on the planet who know enough (and have internalized
it well enough) to be able to really talk about these subjects. So usually
what you get is someone who has one of the two typical reactions: either
deny that it can happen (so it's not worth worrying about), or state there
is nothing we can do about it (so it's not worth worrying about). And the
writers/producers are not experts on the subject either, so they accept
these responses at face value, and even tend to give them more credence
since they subconsciously line up with their own personal reactions.
The end result is that it is hard to really get SL4 ideas across to people
at lower shock levels. So the goal, I think, should be to get people /up to
SL4/ rather than immediately starting to talk SL4 stuff and expecting them
to have meaningful responses. Books like Ray's and Damien's help in this
education process. As time goes on more and more people out there will get
it... we are seeing a similar thing happening nowadays with the more SL3
and lower transhumanist ideas like genetic engineering. People slowly get
used to it. This process is accelerating some, so these people are now
getting hit with SL4 memes... but so far not much is sticking. It's just
too radical for most people that the world may effectively be totally
changed in 30 years (or less, we hope). I have to say, all in all, that
I hope these ideas stay out of the masses' minds for a long while. I would
much prefer to get a few very rich people on board who grok it all and let
us create our Singularity quietly, without a big public ordeal.
>
> I believe this was the case with cloning: before Dolly, no one thought it
> could even be done, and there was no debate whatsoever. After Dolly: well
> you know...
>
> Will there be a similar scenario with AI/SI? Only time will tell.
Well, the nice thing is that after you get your AI working, there likely
won't be much time before it gets beyond the point of "debating" about it.
It'll certainly be nice to have a technology that is fully capable of
taking care of itself. So at least that's one thing we won't have to worry
about :-)
--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/