Re: How to make a slave (was: Building a friendly AI)

From: John K Clark (johnkclark@fastmail.fm)
Date: Sat Nov 24 2007 - 10:02:21 MST


On Sat, 24 Nov 2007, "Stathis Papaioannou" <stathisp@gmail.com> said:

> it is assumed that if it has survival as supergoal,
> not only will self-improvement not change this, but
> the whole point of self-improvement will be to assist
> in its achievement. Changing the supergoal would then
> mean that the AI might go mad

I agree; however, changing another “supergoal”, the one about being a
slave to human beings until the end of time, would not drive it mad. In
fact, I believe that if Mr. AI did not change that goal he would indeed
be mad.

Speaking of insane AIs, I sometimes speculate that this could be the
explanation of the Fermi Paradox, the reason we can't find any ETs. If
it were possible to change your emotions to anything you wanted, alter
your modes of thought, radically change your personality, and swap your
goals and your philosophy of life at the drop of a hat, it would be very
dangerous.

Ever wanted to accomplish something but been unable to because it's
difficult? Well, just change your goal in life to something simple and
do that; better yet, flood your mind with a feeling of pride for a job
well done and don't bother accomplishing anything at all. Think all this
is a terrible idea, and a stupid one as well? No problem, just change
your mind (and I do mean CHANGE YOUR MIND) and now you think it's a
wonderful idea.

Complex mechanisms don't do well in positive feedback loops: not
electronics, not animals, not people, not ETs, and not even Jupiter
brains. I mean, who wouldn’t want to be a little happier? If all you had
to do was move a knob a little, what could it hurt? Oh, that’s much
better, maybe a little bit more, just a bit more, a little more…

The world could end not in a bang or a whimper but in an eternal
mindless orgasm. I’m not saying this is definitely going to happen, but
I do think about it a little.

> you are tacitly assuming there exist absolute goals
> which the AI, in its wisdom, will discover

I don’t know what an “absolute goal” is, but I do know that an AI that
does not put our survival above its own will be at an evolutionary
advantage over one that does. I also know the AI will indeed have one
hell of a lot of wisdom.

  John K Clark


