Re: [sl4] Simple friendliness: plan B for AI

From: Alexei Turchin (alexeiturchin@gmail.com)
Date: Tue Nov 23 2010 - 14:41:51 MST


An interesting question is:
"Is the question of the friendliness of AI equivalent to the question of
the meaning of life?"

That is, if we hold that indefinitely long human life is the final value, we
will program our AI accordingly.
And then: is it possible to rationally construct the meaning of life?

On Tue, Nov 23, 2010 at 9:50 AM, John K Clark <johnkclark@fastmail.fm> wrote:

>
> On Sat, 13 Nov 2010 "Piaget Modeler" <piagetmodeler@hotmail.com> said:
>
> > Would Asimov's three laws be an easier starting point?
>
> Sure, it would be easier if Asimov's three laws could work in the real
> world and not just in stories, but there is no way they could.
>
> > If not, why not?
>
> Because sooner or later somebody is going to order the AI to prove or
> disprove something that is true but not provable (the Goldbach
> conjecture, perhaps). It will never find a counterexample to show the
> statement is false, and never find a proof to show it is true, so the
> AI would enter an infinite loop. That's why human minds don't operate
> on a static goal structure, and no intelligence could.
>
> John K Clark
>
>
> --
> John K Clark
> johnkclark@fastmail.fm
>
>
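Clark's infinite-loop worry can be made concrete with a small sketch (Python
here, purely illustrative; the function names are this example's own, not
anything from the thread). An agent whose fixed, non-negotiable goal is
"find a counterexample to the Goldbach conjecture" runs forever if the
conjecture happens to be true, since neither a counterexample nor a proof
ever arrives to release it:

def is_prime(n: int) -> bool:
    # Trial-division primality test; slow but adequate for a demonstration.
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_counterexample_search() -> int:
    # Return the first even number > 2 that is NOT a sum of two primes.
    # If the Goldbach conjecture is true, this loop never returns: this
    # unbounded search is the trap a static goal structure cannot escape.
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n  # counterexample found (never reached, as far as anyone knows)
        n += 2

A human mathematician would eventually give up, change strategy, or question
the goal itself; an agent hard-wired to the goal, on this argument, cannot.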


