From: Michael Roy Ames (firstname.lastname@example.org)
Date: Sat Feb 08 2003 - 17:23:16 MST
Ben Goertzel wrote:
> Michael, I read your post carefully but I'm not sure if you disagree
> with my perspective or not. I am not suggesting shielding the AGI
> from anything, but I'm also suggesting avoiding explicitly training
> it to be a "winner" in competitions.
Training an AGI to be a "winner" in competitions before the AGI
understands the wider context of game-playing, and the non-zero-sumness
of the competition within that wider context, would be a mistake. But
once the AGI can understand that context, and how human games and
competitions operate positively within it, then it would be
counterproductive to *avoid* training it to win games.
In a competition between humans and/or AGIs, the 'winner' should be
**everyone**, because winning a game/competition should be a subgoal
embedded within a larger context. In a way, this is the old
subgoal-supergoal mixup. (Part of) the supergoal should be: we all
win together. One of the subgoals might be: by learning how to win a
game, I can help everyone succeed even better. If a subgoal does not
lead to the supergoal, then it is not a desirable subgoal.
Michael Roy Ames
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT