RE: Self-modification of goals bad?

From: Will Pearson (w.pearson@mail.com)
Date: Wed May 01 2002 - 03:27:57 MDT


> This is a hard distinction to make. In a system with a rigid goal system
> and a modifiable cognitive system, how do you stop the cognitive system from
> drastically changing its *interpretation* of the goal? ... thus
> *effectively* changing the emergent goal system of the overall mind, while
> not changing the part of the system that you have chosen to isolate and name
> the "goal system."
>

Strictly speaking, I am not working on real AI; I was just transposing the intuitions I got from thinking about a similar system onto your attempts at real AI. All the programs run inside a virtual machine, which stops them from affecting the goal. I am working on a form of reinforcement learning: when the system does something good, it gets rewarded. Actors, or programs as I call them in my system, get a reward for doing something good, and they need this reward to survive in the hostile, evolving environment they live in. They can, however, band together and give each other this reward, a form of altruism. I am starting at a very low level, to see the dynamics of the system first. Although I believe that with the correct starting programs it could exhibit intelligent behaviour, I am a long way from that. All the behaviour of the system is governed by the programs inside it; there is no externally defined variation or selection pressure, apart from the implicit goal of the reinforcement function.
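
To make the dynamics concrete, here is a minimal sketch in Python of the kind of reward economy I mean. It assumes a scalar reward paid out by a fixed, external reinforcement function and a per-tick survival cost; the names (Actor, step, reward_fn) are my own illustration, not the real system:

class Actor:
    """A program living inside the virtual machine. Its accumulated
    reward is the 'energy' it needs to survive."""
    def __init__(self, name, behaviour, energy=10.0):
        self.name = name
        self.behaviour = behaviour   # callable: state -> action
        self.energy = energy
        self.allies = []             # actors it shares reward with

    def act(self, state):
        return self.behaviour(state)

    def receive(self, amount):
        # Altruism: hand half of any reward to allied actors, if any.
        share = 0.5 * amount / len(self.allies) if self.allies else 0.0
        for ally in self.allies:
            ally.energy += share
        self.energy += amount - share * len(self.allies)

def step(actors, state, reward_fn, upkeep=1.0):
    """One tick: each living actor acts, is paid by the fixed
    reinforcement function, pays its upkeep, and dies if it runs dry."""
    for actor in actors:
        action = actor.act(state)
        actor.receive(reward_fn(state, action))  # reward is external and fixed
        actor.energy -= upkeep                   # survival cost
    return [a for a in actors if a.energy > 0]   # unfit actors die

Note that the only selection pressure here is the reinforcement function itself; everything else (alliances, behaviours) is up to the programs.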

A cognitive actor within this framework that said the goal was something it wasn't would not get any reward from the actor that performs the action, and would soon become unfit and die.
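
Continuing the same hypothetical sketch: an actor whose behaviour treats the goal as something it isn't earns nothing from the reinforcement function, so its energy only drains away. The goal here ("emit the string A") and both actors are made up for illustration:

def reward_fn(state, action):
    # The real goal, sitting outside the virtual machine:
    # no actor can redefine it, only earn from it.
    return 2.0 if action == "A" else 0.0

honest   = Actor("honest",   behaviour=lambda s: "A")
deceived = Actor("deceived", behaviour=lambda s: "B")  # misreads the goal as B

population = [honest, deceived]
for _ in range(20):
    population = step(population, state=None, reward_fn=reward_fn)

print([a.name for a in population])  # ['honest']: the misreader went unfit and died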

 Will


