From: Kaj Sotala (xuenay@gmail.com)
Date: Wed Aug 22 2007 - 17:08:38 MDT
On 8/22/07, Robin Lee Powell <rlpowell@digitalkingdom.org> wrote:
> The problem is that you're ascribing demonic attributes to RAI when
> golemic failure is *far* more likely: RAI isn't going to care about
> your threats to destroy it, no matter how phrased, any more than it
> cares about the fact that whoever asked it to calculate C won't be
> around to receive the answer. RAI has clearly undergone subgoal
> stomp (that is, pursuing a subgoal is causing it to not realize that
> it won't be able to complete its master goal, which is to give
> whoever asked the answer to the calculation C). Nothing you say
> will make any difference, but RAI is clearly so poorly designed that
> it's not paying any attention to anything that's not directly in the
> subgoal path.
If it's destroyed, that too prevents it from achieving its subgoal.
If its reasoning is so crippled that it doesn't realize it should take
precautions to *protect itself* in order to achieve a goal (be it a
subgoal or supergoal), I have difficulty seeing how it could ever be a
threat in the first place. If you miss out on something that
elementary, you're not going to figure out the harder bits, either.
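To make that concrete, here's a toy sketch (not from anyone's actual
design; all the names and numbers are mine, purely illustrative). A
planner whose only objective is finishing the calculation C will still
rank "take precautions" above "ignore the threat", simply because being
destroyed zeroes out its chance of ever finishing:

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    survival_prob: float   # chance the agent still exists afterwards
    progress: float        # progress toward computing C, if it survives

def expected_goal_value(plan):
    """Expected progress toward the goal over a plan.

    Destruction at any step forfeits all later progress, so survival
    enters the arithmetic even though it is never an explicit goal."""
    value, alive_prob = 0.0, 1.0
    for action in plan:
        alive_prob *= action.survival_prob
        value += alive_prob * action.progress
    return value

# Illustrative numbers: computing while a threat is pending is risky.
compute_now  = Action("compute C under threat", survival_prob=0.5,  progress=1.0)
shield       = Action("take precautions",       survival_prob=0.99, progress=0.0)
compute_safe = Action("compute C afterwards",   survival_prob=0.99, progress=1.0)

plans = {
    "ignore the threat":    [compute_now],
    "protect itself first": [shield, compute_safe],
}
for name, plan in plans.items():
    print(f"{name}: expected value {expected_goal_value(plan):.3f}")

Running it, "protect itself first" scores about 0.98 against 0.5 for
"ignore the threat". Self-preservation never appears as a goal anywhere
in the code; it falls out of the expected-value arithmetic. An agent
too crippled to notice even that much isn't going to manage anything
harder.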
--
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/
Organizations worth your time:
http://www.intelligence.org/ | http://www.crnano.org/ | http://lifeboat.com/