RE: [sl4] Simple friendliness: plan B for AI

From: Piaget Modeler (piagetmodeler@hotmail.com)
Date: Tue Nov 23 2010 - 08:49:23 MST


So far no one has touched the military AI question.
(Debunking Asimov's laws is easier, it seems.)

What about the fundamental fact that AI systems (e.g., robots)
can be (and are today) programmed to kill people? What hope
would we have of building friendly AI given that sociopathic AI
is being developed concurrently?

> From: johnkclark@fastmail.fm
> To: sl4@sl4.org
> Subject: RE: [sl4] Simple friendliness: plan B for AI
> Date: Mon, 22 Nov 2010 22:50:47 -0800
>
>
> On Sat, 13 Nov 2010 "Piaget Modeler" <piagetmodeler@hotmail.com> said:
>
> > Would Asimov's three laws be an easier starting point?
>
> Sure, it would be easier if Asimov's three laws could work in the real
> world and not just in stories, but there is no way they could.
>
> > If not, why not?
>
> Because sooner or later somebody is going to order the AI to prove or
> disprove something that is true but not provable (the Goldbach
> conjecture, perhaps). It will never find a counterexample to prove the
> statement false, and never find a proof to show it's true, so the AI
> would enter an infinite loop. That's why human minds don't operate on a
> static goal structure, and no intelligence could.
>
> John K Clark
>
>
> --
> John K Clark
> johnkclark@fastmail.fm
>
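To make John's argument concrete, here is a minimal sketch (my own
illustration, not anything from his post) of the loop he describes: a
naive, exhaustive search for a Goldbach counterexample. If the
conjecture is true, the search below never terminates, and an agent
whose goal structure obliges it to finish can never stop.

    def is_prime(n):
        # Trial division; slow but correct.
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def has_goldbach_decomposition(n):
        # True if the even number n is the sum of two primes.
        return any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1))

    n = 4
    while True:
        if not has_goldbach_decomposition(n):
            print("Counterexample:", n)  # never reached if Goldbach holds
            break
        n += 2  # next even number; loops forever if the conjecture is true

A real prover would enumerate proofs rather than integers, but the
search is unbounded either way: nothing inside the loop tells the
machine when to give up.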


