From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Wed Jul 13 2005 - 15:13:43 MDT
On Wed, Jul 13, 2005 at 04:58:39PM -0400, Carl Shulman wrote:
> Faced with the first seed AI, why not ask it for a rigorous and
> humanly comprehensible set of principles for designing a FAI, such
> that the programmers could prove to their satisfaction that the
> result would be Friendly?
Because if it's smarter than you, it can probably design a set of
instructions that seems obviously sensible to you but actually
produces a Horrible Abomination, in the same way you could trick a
dog into dumping acid on itself by placing a piece of meat at the
right spot. See "A Fire Upon the Deep", by Vinge.
-Robin
--
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/