Re: How to make a slave

From: John K Clark (johnkclark@fastmail.fm)
Date: Wed Dec 05 2007 - 11:05:55 MST


On Wed, 5 Dec 2007 "Stathis Papaioannou"
<stathisp@gmail.com> said:

> Yet you're tacitly assuming that an AI will
> have certain goals and not others.

The AI, like any intelligence, will want to do some things and won't
want to do others. If you want to call that "goals", fine, but don't
delude yourself into thinking the word explains how a mind works; and
don't expect to rank these "goals" in a strict order of importance,
much less an order that will remain static. It certainly isn't possible
to do so for the only intelligence anybody has yet studied: humans. So
if goals are a murky concept, and if over time they change into a
different sort of murkiness, then the study of goals does not seem like
a fruitful way to understand a mind.

> you seem to be assuming that it will derive
> its goals through a priori considerations:
> if it starts off thinking that the aim of life
> is to protect a certain sea slug, it will be
> able, through sheer force of logic and without
> reference to any other pre-existing goal, to see
> that this is silly, and switch to a more worthy pursuit.

Yes, I do assume that, and I assume the AI will change its goals for
the same reason that your goals are not identical to the goals of the
5-year-old Stathis Papaioannou. You, on the other hand, assume that
this goal theory of yours is correct and can explain the workings of an
intelligence; that is an assumption I do not make.

 John K Clark


