From: Richard Loosemore (firstname.lastname@example.org)
Date: Fri Feb 03 2006 - 17:20:51 MST
About this discussion of AGI motivation (which is what it is).... I am
astonished at the wild guesses being thrown around.
What a sentient creature *does* will depend on the motivational system
(call it a "goal system" if you like: I have reasons for wanting to
generalize to the term "motivation") that is designed into it. It does
not get to *be* an AGI unless someone designs a motivational system to
make it do things.
So what an AGI will feel compelled to do will depend totally on a design
factor, and the answer to all these questions about whether they will
kill everything, be friendly, not be friendly, etc, is...... "well, tell
me what its designer designed it to do, and I'll tell you."
Does anyone remember that back in the early days of telephones, some
people thought that if one connection too many were made in the
worldwide network of wires, the phone system might become intelligent?
It took a while before people realized that intelligence doesn't just
pop up out of nowhere when there are enough wires.
Same with AGI motivation. You don't just build an intelligent system and
then switch it on and find out what motivates it: you don't get
motivational systems for nothing ("no free lunch for motivational system
engineers"). If there is no motivational system in it when you switch it
on, it's just a lump of expensive porridge.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT