RE: Ben what are your views and concerns

From: Peter Voss (peter@optimal.org)
Date: Sun Oct 01 2000 - 23:54:34 MDT


Ben, thanks for posting the excerpt from your book. It raises a month's worth
of discussion; let me comment on just two issues:

I'm not sure why you would choose 'compassion' as your primary social
virtue. I would think that honesty & fairness (plus a dash of benevolence -
giving the benefit of the doubt) among intelligent agents are more likely to
optimize effectiveness. They would interact on the basis of the trader
principle: agents would trade information and specialized computational
resources with each other. Trade could be based on barter, money, or some
virtual currency - negotiated, auctioned, or whatever. Networks of
reputation and 'policing' agencies would minimize the motivation for
'cheating'.
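
To make that picture concrete, here is a minimal sketch in Python (the
Agent class, the reputation ledger, and propose_trade are all hypothetical
names of my own, not anything from the book) of agents exchanging compute
for a virtual currency, with a shared reputation record that makes cheating
self-defeating:

    class Agent:
        """A hypothetical trading agent holding compute and a currency balance."""
        def __init__(self, name, compute, credits):
            self.name = name
            self.compute = compute    # specialized computational resource units
            self.credits = credits    # virtual currency balance

    # Shared reputation ledger: agents that renege lose standing, and
    # counterparties refuse to deal with them - 'policing' by public record.
    reputation = {}

    def propose_trade(buyer, seller, units, price, honest=True):
        """Exchange 'units' of compute for 'price' credits; update reputation."""
        if reputation.get(seller.name, 1.0) < 0.5:
            return False              # reputation too low: trade refused
        if buyer.credits < price or seller.compute < units:
            return False              # one side cannot cover the trade
        buyer.credits -= price
        seller.credits += price
        if honest:
            seller.compute -= units
            buyer.compute += units
            reputation[seller.name] = min(1.0, reputation.get(seller.name, 1.0) + 0.05)
        else:
            # Cheating: take payment, deliver nothing; reputation collapses.
            reputation[seller.name] = reputation.get(seller.name, 1.0) - 0.6
        return True

    a = Agent("A", compute=100, credits=50)
    b = Agent("B", compute=20, credits=200)
    propose_trade(buyer=b, seller=a, units=30, price=60)                # honest
    propose_trade(buyer=b, seller=a, units=10, price=20, honest=False)  # cheats
    propose_trade(buyer=b, seller=a, units=5, price=10)                 # refused

Barter, auctions, or negotiated pricing would only change how 'price' is
set; it is the public reputation record that keeps defection unprofitable.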

However, a more fundamental problem is that you seem to assume that all
intelligent actors (like Webmind) will have roughly the same amount of power
and intelligence - otherwise cooperation would not be the likely outcome. I
think it far more likely that a particular AI (including all of its
distributed but tightly coupled intelligence) will have far superior
intelligence, and thus totally dominate lesser agents. It's hard to see how
one ends up with stable communities of intelligent agents. Additionally, if
this AI can bootstrap its raw intelligence (something many of us are working
on), one clearly runs the risk of even more fundamental dominance.
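
A toy calculation makes the point (the growth rates below are arbitrary
assumptions of mine; only the shapes of the curves matter): an agent that
can feed its gains back into its own intelligence compounds, while one
improving at a fixed rate grows only linearly, so the gap widens without
bound:

    # Arbitrary illustrative rates: one agent gains a fixed increment per
    # cycle, the other re-invests a fraction of its current intelligence.
    fixed, bootstrapped = 1.0, 1.0
    for cycle in range(20):
        fixed += 0.10          # linear: +0.10 per cycle
        bootstrapped *= 1.10   # compounding: +10% of current level
    print(f"fixed:        {fixed:.2f}")         # 3.00
    print(f"bootstrapped: {bootstrapped:.2f}")  # ~6.73, and accelerating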

peter@optimal.org www.optimal.org
