From: Durant Schoon (durant@ilm.com)
Date: Wed Jun 13 2001 - 12:50:00 MDT
> From: Jimmy Wales <jwales@bomis.com>
>
> Heck, I'll give you $1000 if I "decide to let you out".
>
> But, notice what I'm doing already. By making strong claims and
> putting my money and reputation on the line, I increase my mental
> resistance.
It seems sort of strange to even want to try to create an unFriendly
AI and then interact with it, when Eli has gone to the trouble of
describing the steps to building a Friendly one. Or maybe the argument
is: "If we had one, we could study it safely".
I'm imagining a similar argument for locking yourself in a room with
a dangerous technology. I'm imagining someone saying:
"I've created a strain of airborne ebola, wouldn't it be interesting
to see how it works, locked alone in a room with a vile of it and plenty
of observation instrumention?"
"Hey, I'll do that, but I'm going to use an extremely trustworthy
CleanRoom3000(tm) suit to protect me. Nothing has ever gotten through
one of those."
I confess that a UFAI might be more interesting to interact with than,
say, a dangerous strain of airborne ebola...but not *that* much more
interesting if you consider the downside.
-- Durant Schoon