From: Matt Mahoney (matmahoney@yahoo.com)
Date: Fri Feb 20 2009 - 09:40:02 MST
--- On Fri, 2/20/09, Petter Wingren-Rasmussen <petterwr@gmail.com> wrote:
> As I mentioned in this thread
> <http://www.sl4.org/archive/0902/19805.html>, I think an AGI
> with hardcoded dogmatic rules will have some serious drawbacks
> in the long run. I will try to show an alternative here, one
> that will still remain friendly to humans.
>
> I base this on previous work with AIs in game theory, such as
> the tit-for-tat tournaments
> <http://en.wikipedia.org/wiki/Tit_for_tat>.
>
> Note that the ideas below are rough outlines and that my
> programming skills are rudimentary. (Cognitive-behavioural
> theory is my field.) My reason for posting this is that you
> will hopefully point out problems/faults that I haven't
> noticed myself, and maybe even refer me to similar work that
> has already been done.
>
> Now let's start with a lot of simple AIs (semirandom neural
> networks) in a two-dimensional virtual landscape, arranged
> like a checkerboard (but with a lot more squares), where each
> AI takes up one randomly assigned slot and can move one step
> at a time in any of the eight directions (like the king in
> chess). The AIs are able to move around freely and detect if
> they get close to another AI. Each AI also starts with zero
> points.
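>
> To make the setup concrete, here is a rough Python sketch of
> the board and the movement rule (the grid size and all names
> are placeholders, not part of the proposal):
>
>     import random
>
>     GRID_SIZE = 100  # stand-in for "a lot more squares"
>     # The eight king moves: N, NE, E, SE, S, SW, W, NW.
>     DIRECTIONS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
>                   (1, 0), (1, -1), (0, -1), (-1, -1)]
>
>     class Agent:
>         def __init__(self, agent_id):
>             self.id = agent_id
>             self.points = 0  # every AI starts at zero
>             self.pos = (random.randrange(GRID_SIZE),
>                         random.randrange(GRID_SIZE))
>
>         def move(self, direction):
>             # One step in any of the eight directions,
>             # clipped to the edges of the board.
>             dr, dc = DIRECTIONS[direction]
>             r = min(max(self.pos[0] + dr, 0), GRID_SIZE - 1)
>             c = min(max(self.pos[1] + dc, 0), GRID_SIZE - 1)
>             self.pos = (r, c)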
>
> Input channels: direction and distance to nearby AIs, output
> detected from adjacent AIs, and change in current points. The
> AIs will also be able to recognize and remember each
> individual AI.
> Output channels: a movement command and some kind of signal
> output (a binary string will be sufficient).
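>
> As an interface, the channels might look roughly like this
> (world.nearby, world.adjacent_outputs and network.step are
> hypothetical helpers; the neural network itself is left out):
>
>     def sense(agent, world):
>         # Input channels: direction/distance to nearby AIs,
>         # signals from adjacent AIs tagged with the sender's
>         # identity (so individuals can be recognized and
>         # remembered), and the change in current points.
>         return {
>             "neighbors": world.nearby(agent),
>             "signals": world.adjacent_outputs(agent),
>             "point_delta": agent.points - agent.last_points,
>         }
>
>     def act(agent, observation):
>         # Output channels: a movement command (0-7) and an
>         # optional binary-string signal; None means silence.
>         direction, signal = agent.network.step(observation)
>         return direction, signal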
>
> Rough outline of points:
>
> Using a certain amount of CPU will cost 1 point (to avoid
> slowing down the whole system unnecessarily with meaningless
> loops). Creating output will cost 10 points.
>
> Detecting output from adjacent AIs will give a lot of points
> the first time it is detected from each individual AI, but
> the reward will decrease rapidly if output is detected from
> the same AI several times: say 1000 points the first time,
> 100 the second, 10 the third, and 0 from the fourth time on.
> The points received will increase by 1 point per turn in
> which no input is received from that particular AI.
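>
> The point bookkeeping could look something like this (one
> possible reading of the schedule above, with the recovery
> treated as a simple additive bonus):
>
>     CPU_COST = 1       # per unit of CPU used
>     OUTPUT_COST = 10   # per signal created
>     BASE_REWARDS = [1000, 100, 10, 0]  # 1st, 2nd, 3rd, 4th+
>
>     class RewardTracker:
>         def __init__(self):
>             self.times_heard = {}  # sender -> detection count
>             self.quiet_turns = {}  # sender -> turns of silence
>
>         def on_detect(self, sender_id):
>             n = self.times_heard.get(sender_id, 0)
>             base = BASE_REWARDS[min(n, 3)]
>             # The reward recovers by 1 point per turn that
>             # the sender has stayed quiet.
>             bonus = self.quiet_turns.get(sender_id, 0)
>             self.times_heard[sender_id] = n + 1
>             self.quiet_turns[sender_id] = 0
>             return base + bonus
>
>         def end_turn(self, all_ids, heard_ids):
>             for aid in all_ids:
>                 if aid not in heard_ids:
>                     self.quiet_turns[aid] = self.quiet_turns.get(aid, 0) + 1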
>
> Let's take the 10% that reach 100 000 points first and make
> 10 new versions of each with slight variations (through some
> kind of genetic algorithm). Then repeat the experiment for a
> few generations.
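>
> The selection step, roughly (mutate is a placeholder for the
> genetic algorithm's variation operator, and ranking by points
> stands in for tracking who crossed 100 000 first):
>
>     import copy
>
>     def next_generation(population, mutate):
>         # Keep the top 10% and make 10 varied copies of each.
>         ranked = sorted(population, key=lambda a: a.points,
>                         reverse=True)
>         survivors = ranked[:max(1, len(population) // 10)]
>         children = []
>         for parent in survivors:
>             for _ in range(10):
>                 child = copy.deepcopy(parent)
>                 mutate(child.network)  # slight variation
>                 child.points = 0       # fresh start
>                 children.append(child)
>         return children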
>
> A behaviour similar to that of the winners of the tit-for-tat
> tournaments can be expected to develop. It is important that
> similar reward systems are kept throughout the development
> (i.e. you earn a lot from receiving and only lose a little
> from giving).
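>
> For reference, the strategy that won Axelrod's tournaments is
> tiny (moves encoded as "cooperate"/"defect"):
>
>     def tit_for_tat(partner_history):
>         # Cooperate first, then mirror the partner's last move.
>         if not partner_history:
>             return "cooperate"
>         return partner_history[-1]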
>
> After a while the AIs can be expanded to be more complex, and
> the criteria for points can be made tougher socially (partial
> imitation, which is behaviouristically speaking the simplest
> form of empathy), linguistically (greeting phrases, farewell
> phrases, eventually learning speech) and spatially (the
> ability to navigate around obstacles and through labyrinths,
> adding more dimensions, manipulating objects, etc.).
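>
> The social criterion could, for instance, start as a small
> bonus for echoing part of a neighbour's recent signal (a toy
> measure of partial imitation; the per_bit value is invented):
>
>     def imitation_bonus(my_signal, neighbor_signal, per_bit=1):
>         # Partial imitation earns partial credit: count the
>         # positions where the two bit strings agree.
>         matches = sum(a == b for a, b in
>                       zip(my_signal, neighbor_signal))
>         return per_bit * matches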
>
> When the tasks differ in this way, one thing will remain
> constant - the AIs that keep on doing whatever earns them
> points will prosper, and a mechanism for
> encouragement/happiness will have been implemented.
>
> Eventually it might be possible to introduce them into a
> virtual world like Second Life and evolve more complex rating
> systems (i.e. ratings by human avatars, money earned, objects
> created). After that, anything would be possible.
>
> The whole point of this venture is that I believe it will
> result in a
> social AI that intuitively reacts and thinks in the same
> way that we do.
>
> I'm looking forward to your criticism.
I think your initial approach of evolving tit-for-tat strategies in simple environments should work. However, when the AIs reach human-level intelligence (when they can do everything the human brain can do), you are in danger of no longer controlling the awarding of points. Once an AI can interact with you through language, it could convince you to modify the program that controls its evolution. If the AI knows everything that you know, then there is no way for you to tell whether it is helping you achieve your goals or not; it knows exactly which lies it can get away with. It might be following tit-for-tat with you, or it might have realized that eliminating humanity would be its final move in the game.
-- Matt Mahoney, matmahoney@yahoo.com