Re: Happy Box

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Sun May 04 2008 - 11:29:11 MDT


--- John K Clark <johnkclark@fastmail.fm> wrote:

> An agent that discovers new information but refuses to change its
> goals merely (Merely!) because it is a means to an end, that is to
> say it won’t change its goal system even though it now knows the old
> structure won’t work but a new structure will, is about as
> intelligent as a rock. And this is how you expect to make a super
> intelligent slave? Good luck, you’ll need it.

I discovered that my goal system won't work. For example, I am not
happy if I don't eat, or if I am too hot, or too cold, or poked with
sharp objects. I want to change my goal system so I am happy all the
time no matter what happens. I know this is possible because I did
simulations with http://www.mattmahoney.net/autobliss.txt with the
second and third arguments both positive. Will this make me more
intelligent?
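
For anyone who hasn't run it, the idea is roughly this (a simplified
Python sketch, not the autobliss code itself; the learner, the update
rule, and the names r_right and r_wrong are only illustrative
stand-ins for the two reward arguments): a trivial reinforcement
learner guesses a 2-input logic function and is rewarded or punished
for each answer.

import random

def run(target, r_right, r_wrong, steps=10000):
    # target is a 4-entry truth table for a 2-input logic function.
    # weights[i] is the preference for answering 1 on input pattern i.
    weights = [0.0, 0.0, 0.0, 0.0]
    total_reward = 0.0
    for _ in range(steps):
        i = random.randrange(4)               # random 2-bit input
        out = 1 if weights[i] > 0 else 0      # greedy answer
        reward = r_right if out == target[i] else r_wrong
        # Reinforce whatever it just did, in proportion to the reward.
        weights[i] += reward if out == 1 else -reward
        total_reward += reward
    return total_reward

if __name__ == "__main__":
    xor = [0, 1, 1, 0]
    # Normal training: reward right answers, punish wrong ones.
    print("reward 1, penalty -1:", run(xor, 1.0, -1.0))
    # "Happy box": both arguments positive, so every answer feels good.
    print("reward 1, reward 0.5:", run(xor, 1.0, 0.5))

With a positive reward and a negative penalty it converges on the
target function; with both reward arguments positive it reports
maximal "happiness" on every step and never gets any better at the
task.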

-- Matt Mahoney, matmahoney@yahoo.com


