From: Samantha Atkins (firstname.lastname@example.org)
Date: Sun Sep 17 2000 - 04:12:00 MDT
Josh Yotty wrote:
> Nuke the sucker. Make sure it isn't mobile. Riddle it with bullets. Melt it. EVAPORATE IT. Unleash nanobots. Include a remote fail-safe shutoff the AI can't modify. Don't give it access to nanotech. Make it human (or upgraded human) dependent in some way so it doesn't eradicate us.
> Or am I just not getting it? ^_^
Well... It looks like a pretty strong attack of xenophobia from here.
Do we need to fear the AI, especially a singularity-class AI? I'm not
sure. Eliezer argues that those fears are unfounded. I am not yet
persuaded, but I grant the possibility that the AI will be friendly, at
least by the time it reaches human or greater intelligence. If it
isn't friendly, I doubt we could successfully stop it in any case. So I
think we need to put quite a bit of work into doing what we can to
ensure that the AI is friendly and trustworthy.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT