Re: Different View of IA

From: Will Pearson
Date: Wed Apr 24 2002 - 03:27:45 MDT

> It may be that the "sysop" meme, or more properly the "intelligent
> substrate"/"singleton" family of memes, is killing off other ideas simply
> because more work has gone into visualizing intelligent substrate
> scenarios. Hopefully this is not the case and the intelligent substrate
> scenarios are winning because they are, in fact, the best ideas discussed so
> far on SL4. If not, eventually someone will come along with an idea that,
> being stronger, kills and devours the intelligent substrate scenarios.

If the intelligent substrate scenario is the only one fed on this list, which it seems to be, then it will become very hard to destroy, and competing memes may decide it is not worth the effort.

I am not the most brilliant writer. I find the best way to convince someone is to find out where people's deep beliefs differ from mine, then start from those deep beliefs and work up from them.

So to start off, I take your belief to be that human society could not take a form that would prevent abuses of nanotechnology. If this is not your belief, I apologise.

So, to let other people do the talking: I recently found an article in Wired by David Brin about how society could regulate itself by everybody basically being able to spy on everybody else. Ramp the technology up a bit, with prehuman intelligences helping each human do the spying, and you could get a stable society.

This is also similar to the sousveillance meme of Steve Mann, one of the leading figures in wearable computing.

Now let us say that the first nanotech disaster isn't the all-conquering grey goo. It could require a certain element that is rare, or have some other weakness. Such an event would be very bad, but not bad enough to destroy humanity. Humanity would then be very aware of the danger. We have managed to live for 30 years with nuclear technology that, if it got into the wrong hands, could destroy the earth, and it hasn't happened yet.

We'll bash this one out before moving on to other memes. As we don't know anything about what will happen when an AI becomes transhuman, arguments that more lives will be saved if we go the AI route carry no weight with me.

Side note: has there been any thought about moving towards a standard-units way of measuring intelligence? For example, a Super Intelligence could be a Mega-intelligence, meaning one million times more intelligent than the average human being. Then a whole new world of Exa-intelligences etc. would be available, and prehuman intelligences could be deci-intelligent or centi :) Or is this just a silly idea?
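The prefix idea above amounts to mapping a multiplier of average human intelligence onto the nearest SI prefix. A minimal sketch of that mapping, purely illustrative (the prefix table and the `label` helper are my own invention, not an established measurement scheme):

```python
import math

# Hypothetical SI-prefix intelligence scale: an entity N times as
# intelligent as the average human gets the SI prefix for N.
PREFIXES = {
    -2: "centi", -1: "deci", 0: "",
    3: "kilo", 6: "mega", 9: "giga", 12: "tera", 18: "exa",
}

def label(multiplier):
    """Map a multiplier of average human intelligence to a prefix name."""
    exponent = math.floor(math.log10(multiplier))
    # Snap down to the nearest prefix we have defined.
    best = max(e for e in PREFIXES if e <= exponent)
    prefix = PREFIXES[best]
    return (prefix + "-intelligence") if prefix else "intelligence"

print(label(1_000_000))  # a million times average: "mega-intelligence"
print(label(0.1))        # a prehuman intelligence: "deci-intelligence"
```

So under this sketch a Super Intelligence at a million times human level would be a Mega-intelligence, exactly as suggested above.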

I will have to read Vinge; you can't get him on the high street over here, although you can get Zindell. Strange, isn't it?

Note that I am not really interested in converting the list entirely; I would just like enough space for different memes on this list.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT