Re: More silly but friendly ideas

From: Panu Horsmalahti (nawitus@gmail.com)
Date: Wed Jun 04 2008 - 12:24:50 MDT


There is no contradiction in these Friendly AI discussions. Different
artificial intelligence systems differ in how easily they can be predicted. If
you program an AI from the ground up to be easily predictable in a general
sense (i.e., to follow some supergoal), then it is easier to predict than an AI
not specifically designed for that purpose; a toy sketch of this distinction
follows the two points below.
1. An AGI that is not explicitly built to follow some "supergoal" will indeed
act unpredictably once it starts to improve itself.
2. Friendly AI is the proposal that an AI should be carefully built to follow
some supergoal (protect humanity, follow human orders, etc.), but this kind of
AI is probably orders of magnitude harder to build than an unfriendly AGI,
hence a concentrated effort is needed for it to be created first.


