From: Russell Wallace (email@example.com)
Date: Fri May 20 2005 - 04:39:25 MDT
On 5/20/05, Michael Wilson <firstname.lastname@example.org> wrote:
> I've had someone seriously propose to me that we use limited AI to
> rapidly develop nanotech, which would then be used to take over the world
> and shut down all other AI/nanotech/biotech projects to prevent anything
> bad from happening (things got rather hazy after that). I don't worry
> about it because 'highly restricted human-level AGI' is very, very hard
> and ultimately pointless (if you know how to make a 'human-level AGI'
> controllable, then you know how to make a transhuman AGI controllable).
> People less convinced about hard takeoff will doubtless be more concerned
> about this sort of thing.
Heh. Yeah, I haven't seen many actual proposals to do it - but I have
seen a lot of technical folk who clearly want to believe it can be done.
I'm not convinced about hard takeoff... I also think it's unlikely
that someone whose thought processes haven't risen above that level
will be able to create anything beyond smart-weapon or paperclip AI at
best.
Still, I think I'll toss in my two cents' worth here, not because I
believe Ben or the SIAI are in it for power, but because SL4 is a
public archived list and the above line of thought might look just a
little too tempting for some:
1. Wannabe world-conquerors should first read Eliezer's comments on
the difficulty of creating superintelligent AI that doesn't just turn
you into grey goo. Then reread them until you understand them.
2. Taking over the world, even with ultratechnology, would be a far
harder and more dangerous task than a lot of technical people seem to
realize. (Please note: this is not a proposal to discuss ideas for it!)
3. A failed attempt would not only be likely to kill the mad
scientist; it could very well start a chain of events leading
to the extinction of life on Earth. (Scenario 1: half a dozen
nations, having seen the live demo of AI and nanotechnology,
immediately start Manhattan Projects. Etc. Scenario 2: for fear of
scenario 1, a world government is set up to suppress development of
dangerous technology. Etc.)
So if one wants power, politics is a more rewarding field of activity
than AI research. It really is.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT