Re: [sl4] Simple friendliness: plan B for AI

From: Charles Hixson
Date: Tue Nov 23 2010 - 11:21:18 MST

On 11/22/2010 10:50 PM, John K Clark wrote:
> On Sat, 13 Nov 2010 "Piaget Modeler" said:
> ...
>> If not, why not?
> Because sooner or later somebody is going to order the AI to prove or
> disprove something that is true but not provable (the Goldbach
> conjecture, perhaps). It will never find a counterexample to prove it
> wrong and never find a proof to show it's true, so the AI would enter
> an infinite loop; that's why human minds don't operate on a static goal
> structure, and no intelligence could.
> John K Clark
That's independent of Asimov's three laws. Valid, but independent. If
you could formulate a system that would work with those laws, it could
quite easily be interrupt-driven or parallel-threaded, with any one
process allowed to occupy only so many threads and so large a percentage
of the compute cycles. But even defining what counts as a human is
essentially impossible. (Which is what made the laws great story devices.)
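The resource-capping idea above can be sketched concretely. Here is a minimal, hypothetical illustration (not anyone's actual proposal): a search for a Goldbach counterexample that is given a fixed step budget, so an open-ended proof task yields control instead of looping forever. The function names and the budget mechanism are my own invention for the example.

```python
def is_prime(n):
    """Trial-division primality test, adequate for a small demo."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def bounded_goldbach_search(limit, step_budget):
    """Look for an even number <= limit that is NOT a sum of two primes,
    but give up once step_budget primality checks have been spent.
    Returns one of: ("counterexample", n), ("exhausted", None),
    ("interrupted", None)."""
    steps = 0
    for n in range(4, limit + 1, 2):
        found = False
        for p in range(2, n // 2 + 1):
            steps += 1
            if steps > step_budget:
                # Budget spent: yield to the scheduler rather than loop forever.
                return ("interrupted", None)
            if is_prime(p) and is_prime(n - p):
                found = True
                break
        if not found:
            return ("counterexample", n)
    return ("exhausted", None)
```

With a generous budget the search finishes its finite range inconclusively; with a tiny budget on a huge range it is interrupted, which is the whole point: an unprovable goal costs bounded cycles, not the machine.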

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT