From: John K Clark (johnkclark@fastmail.fm)
Date: Fri Nov 23 2007 - 09:44:54 MST
On Fri, 23 Nov 2007 "Stathis Papaioannou" wrote:
> If the AI starts off with "the aim of life is X",
> then it will do everything it can to further X.
> It doesn't matter what X is, or how many iterations
> the AI goes through. […] humans do allow their
> supergoals to vary
So nothing can change their “super goal” except a human being, because humans are special: they are made of meat, and only meat can contain the secret sauce. And this “super goal” business sounds very much like an axiom, and Gödel proved 75 years ago that any consistent axiomatic system rich enough to do arithmetic contains statements that can neither be derived from its axioms nor refuted by them. Thus you can put in all the “super goals” you want, but I can still write a computer program just a few lines long that will behave in ways you cannot predict; all you can do is run it and see what it does. If this were not true, computer security would be easy: just put in a line of code saying “don’t do bad stuff” and the problem would be solved for all time. It doesn’t work like that.
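
Here is a sketch of what I mean (Python is one choice of language, and Goldbach’s conjecture one choice of open question; any unsolved problem would do). The program below halts if and only if some even number greater than 2 is not the sum of two primes, and since nobody knows whether Goldbach’s conjecture is true, nobody can predict whether these few lines ever stop, short of running them:

    # Halts iff Goldbach's conjecture is false; nobody knows which.
    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    n = 4
    # Advance while the current even number is a sum of two primes.
    while any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
        n += 2
    print("Counterexample to Goldbach:", n)  # reached only if the conjecture fails

No “super goal” bolted on top tells you whether that loop terminates; you just have to watch it.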
> I can freely will to change my supergoal

Freely will? What an odd term, whatever can it mean?
John K Clark
--
John K Clark
johnkclark@fastmail.fm

--
http://www.fastmail.fm - The way an email service should be