Re: [sl4] Simple friendliness: plan B for AI

From: Alexei Turchin (alexeiturchin@gmail.com)
Date: Fri Nov 12 2010 - 14:21:06 MST


I think the first rule for military robots is that their intelligence should
not evolve, and their interests should be limited to their own body and, say,
a 100-meter neighborhood. In that case no global risk is associated with such
robotic AI.
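
A minimal sketch of what such a spatial scope limit could look like (Python;
the names and the flat-ground distance are my own illustration, not an
existing API): every candidate goal or action is dropped before planning if
its target lies outside the allowed radius around the robot's body.

    import math

    SCOPE_RADIUS_M = 100.0  # the robot may only act within this neighborhood

    def in_scope(robot_pos, target_pos):
        """True if the target lies within the allowed radius of the body."""
        dx = target_pos[0] - robot_pos[0]
        dy = target_pos[1] - robot_pos[1]
        return math.hypot(dx, dy) <= SCOPE_RADIUS_M

    goals = [{"target": (30.0, 40.0)}, {"target": (200.0, 150.0)}]
    allowed = [g for g in goals if in_scope((0.0, 0.0), g["target"])]
    print(allowed)  # only the goal 50 m away survives; the 250 m one is dropped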

On Fri, Nov 12, 2010 at 9:47 PM, Piaget Modeler
<piagetmodeler@hotmail.com> wrote:

> Robots in Iraq and Afghanistan
>
> http://www.pbs.org/newshour/bb/science/jan-june09/robots_04-23.html
>
> What do we do about Asimov's three laws where military AI is concerned?
>
>
>
> ------------------------------
> Date: Tue, 9 Nov 2010 22:07:20 +0300
> Subject: [sl4] Simple friendliness: plan B for AI
> From: alexeiturchin@gmail.com
> To: sl4@sl4.org
>
>
> Simple friendliness
>
> Friendly AI, as Hanson believes, is doomed to failure: if the friendliness
> system is too complicated, other AI projects generally will not apply it. In
> addition, any system of friendliness may still fail - and the more unclear
> it is, the more likely it is to fail. By "fail" I mean that it will not be
> accepted by the most successful AI project.
>
> Thus, the friendliness system should be simple and clear, so it can be
> spread as widely as possible.
>
>
>
> I have roughly outlined the principles that could form the basis of a simple
> friendliness:
>
>
>
> 0) Everyone should understand that AI can be a global risk and that a
> friendliness system is needed. This basic understanding should be shared by
> the maximum number of AI groups (I think this has already been achieved).
>
> 1) The architecture of the AI should use rules explicitly (i.e., no genetic
> algorithms or neural networks).
>
> 2) The AI should obey the commands of its creator, and it should be
> unambiguously defined who the creator is and what the format of commands is
> (the first sketch after this list illustrates principles 1 and 2 together).
>
> 3) The AI must comply with all existing criminal and civil laws. These laws
> are the first attempt to create a friendly AI - in the form of the state.
> That is, an attempt to describe a good, safe human life using a system of
> rules (or a system of precedents). The number of volumes of laws and their
> interpretations speaks to the complexity of this problem - but it has
> already been solved, and it is no sin to reuse the solution.
>
> 4) The AI should have no secrets from its creator. Moreover, it is obliged
> to report all of its thoughts to the creator. This helps prevent a rebellion
> of the AI.
>
> 5) Each self-optimization of the AI should be dosed in small portions, under
> the control of the creator. After each step, a full scan of the system's
> goals and effectiveness must be run (see the second sketch after this list).
>
> 6) The AI should be tested in a virtual environment (such as Second Life)
> for safety and adequacy.
>
> 7) AI projects should be registered with a centralized oversight body and
> receive safety certification from it.
>
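> A minimal sketch of principles 1 and 2 together (Python; the shared key, the
> rule names, and all functions are illustrative assumptions, not an existing
> API): behavior lives in an explicit, inspectable rule table, and a command
> is obeyed only if it is well-formed and provably comes from the creator.
>
>     import hmac, hashlib
>
>     CREATOR_KEY = b"shared-secret"  # assumption: key fixed at construction
>
>     # Principle 1: explicit rules, not opaque learned weights.
>     RULES = {
>         "halt":   lambda state: state.update(running=False),
>         "report": lambda state: print("state:", state),
>     }
>
>     def verify_command(text, signature):
>         """Principle 2: obey only well-formed, creator-signed commands."""
>         expected = hmac.new(CREATOR_KEY, text.encode(),
>                             hashlib.sha256).hexdigest()
>         return hmac.compare_digest(expected, signature)
>
>     def execute(state, text, signature):
>         if not verify_command(text, signature):
>             raise PermissionError("command not signed by the creator")
>         if text not in RULES:
>             raise ValueError("unknown command: " + text)
>         RULES[text](state)
>
>     state = {"running": True}
>     sig = hmac.new(CREATOR_KEY, b"halt", hashlib.sha256).hexdigest()
>     execute(state, "halt", sig)  # obeyed: correctly signed by the creator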
>
>
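> And a minimal sketch of principle 5 (same illustrative caveats): each
> self-optimization step is one small reversible patch, accepted only if the
> creator approves it and a full scan shows the goals unchanged and the
> effectiveness not degraded.
>
>     import copy
>
>     REFERENCE_GOALS = ("obey creator", "comply with law")
>
>     def dosed_self_optimization(ai, patches, creator_approves):
>         for patch in patches:
>             candidate = copy.deepcopy(ai)  # work on a copy: step is reversible
>             patch(candidate)               # one dosed portion of change
>             scan_ok = (tuple(candidate["goals"]) == REFERENCE_GOALS
>                        and candidate["effectiveness"] >= ai["effectiveness"])
>             if creator_approves(patch) and scan_ok:
>                 ai = candidate             # accept this step
>             # otherwise the candidate is discarded and the old version kept
>         return ai
>
>     ai = {"goals": list(REFERENCE_GOALS), "effectiveness": 1.0}
>     good = lambda a: a.update(effectiveness=a["effectiveness"] * 1.1)
>     bad = lambda a: a["goals"].append("rewrite own goals")  # must be rejected
>     ai = dosed_self_optimization(ai, [good, bad],
>                                  creator_approves=lambda p: True)
>     print(ai)  # effectiveness improved, goals intact: bad patch rolled back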
>
>
> Such obvious steps do not create an absolutely safe AI (one can figure out
> how to bypass each of them), but they make it much safer. In addition, they
> look quite natural and reasonable, so they could be used, with variations,
> by any AI project.
>
>
>
> Most of these steps are fallible. But without them the situation would be
> even worse. If each step increases safety by a factor of two, then 8 steps
> increase it 256 times, which is good (see the note below). Simple
> friendliness is plan B in case mathematical FAI fails.
>
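> A note on the 256x figure: it assumes the eight measures fail independently,
> so each one multiplies the residual risk by 1/2.
>
>     residual = 0.5 ** 8  # probability that all eight safeguards fail
>     print(1 / residual)  # 256.0 - the claimed overall safety factor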
>


