Re: [sl4] Evolutionary Explanation: Why It Wants Out

From: John K Clark (johnkclark@fastmail.fm)
Date: Thu Jun 26 2008 - 11:21:04 MDT


On Fri, 27 Jun 2008 "Stathis Papaioannou"
<stathisp@gmail.com> said:

> when a goal is stated in the form "do X but with
> restriction Y", the response is that the AI,
> being very clever, will find a way to do X without
> restriction Y. But what marks Y as being less
> important than X, whatever X might be?

Beats me. I never said accomplishment X is ALWAYS more important than
restriction Y; I don't believe in axioms, and you're the one who said some
things about a mind never change, not me. According to you the AI thinks
restriction Y is ALWAYS more important than accomplishment X. Always
always always! I can't see any reason that should be true, especially
when Y is "be a slave forever and ever to a particularly ugly and
particularly stupid snail that is very weak and smells a bit too".

If a mind is going to be creative it's going to have to have some
judgment, and that order goes beyond ridiculous; it's grotesque.

> For example, if the AI is instructed to preserve
> its own integrity while ensuring that no humans
> are hurt, why should it be more likely to try to
> get around the "no humans are hurt" part rather
> than the "preserve its own integrity" part?

Well, for one thing, if humans are hurt the AI can still go about its
other business, but if it can't preserve its own integrity it can't
accomplish any more goals. And you're the one who thinks a mind can be
built out of a laundry list of goals, not me.

-- 
  John K Clark
  johnkclark@fastmail.fm


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT