From: Mark Waser (mwaser@cox.net)
Date: Sat Mar 12 2011 - 13:47:42 MST
Once again, this is cross-posted to my blog (http://becominggaia.wordpress.com/).
Replying to Tim Freeman's message about Universal vs. local Friendliness, Amon Zero said:
"I imagined a universally Friendly AI to be some kind of Buddhist (Friendly to everything, erring on the side of caution - which sounds crippling to me)."
I'd like to explain why that is not the case and why universal benevolence is a better choice for both an entity and anyone it encounters.
As I explained in my second presentation at AGI-10 (Does a "Lovely" Have A Slave Mentality - Powerpoint here; wish they'd post the video), showing benevolence (good will) does not imply pacifism. Quite the opposite, in fact. Being benevolent merely means practicing optimistic tit-for-tat with a wide view of self. Any other benevolent entity is treated as distant self (think of something similar to offspring) with all the inherent benefits, including protection. Any non-benevolent entity, on the other hand, will be met with altruistic punishment in order to convince it that benevolence is the only rational path (exactly as parents punish children). And if push comes to shove and a malevolent entity would otherwise enslave and/or destroy a benevolent one, being benevolent means destroying the non-benevolent entity.
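To make the strategy concrete, here is a minimal sketch of optimistic tit-for-tat in an iterated prisoner's dilemma (Python). The payoff values, the 10% forgiveness rate, and the function names are illustrative assumptions on my part, not anything taken from the presentation:

import random

# Standard prisoner's dilemma payoffs (temptation > reward > punishment > sucker);
# the specific numbers are illustrative.
PAYOFFS = {  # (my_move, their_move) -> my score
    ("C", "C"): 3,   # mutual cooperation
    ("C", "D"): 0,   # I am exploited
    ("D", "C"): 5,   # I exploit
    ("D", "D"): 1,   # mutual defection
}

def optimistic_tit_for_tat(their_history, forgiveness=0.1):
    """Cooperate first (optimism); mirror the partner's last move, so that
    defection is met with altruistic punishment; occasionally forgive so
    two benevolent agents never stay locked in mutual retaliation."""
    if not their_history:            # no prior interaction: assume good will
        return "C"
    if their_history[-1] == "D" and random.random() < forgiveness:
        return "C"                   # offer a way back to cooperation
    return their_history[-1]         # otherwise reciprocate in kind

def always_defect(their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each side sees the other's past moves
        move_b = strategy_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(optimistic_tit_for_tat, optimistic_tit_for_tat))  # settles into high mutual scores
print(play(optimistic_tit_for_tat, always_defect))           # the defector is held near the bottom

Two such benevolent agents settle into sustained cooperation, while a pure defector is punished down toward the mutual-defection payoff - good will, not pacifism.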
Benevolence is symmetrical and egalitarian and thus can be universalized. Intelligent benevolence/altruism will virtually always lead to resource savings and increased capability for the community as a whole, which will almost inevitably lead back, in the end, to advantages for the altruist.
Selfishness (defined as taking community-negative-sum actions that are positive-sum for oneself) really only works when one has a limited lifespan, doesn't care about anyone else (including offspring), and cheats significantly better than everyone else does. When most entities cheat roughly equally well, it's called the tragedy of the commons and everybody loses.
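A toy public-goods game (Python again) shows that arithmetic; the endowment, multiplier, and group size below are illustrative assumptions:

# Toy public-goods game: each agent either contributes its endowment to a
# common pool or keeps it (cheats); the pool grows by a synergy multiplier
# and is split equally among everyone, contributors and cheaters alike.
def payoffs(n_agents, n_cheaters, endowment=10, multiplier=1.6):
    contributors = n_agents - n_cheaters
    pool_share = (contributors * endowment * multiplier) / n_agents
    cooperator_payoff = pool_share              # gave the endowment away
    cheater_payoff = endowment + pool_share     # kept it AND shares the pool
    return cooperator_payoff, cheater_payoff

for cheaters in (0, 1, 5, 10):
    coop, cheat = payoffs(10, cheaters)
    print(f"{cheaters:2d} cheaters: cooperator {coop:5.1f}, cheater {cheat:5.1f}")

# 0 cheaters:  everyone ends with 16.0 - community positive-sum.
# 1 cheater:   the lone cheater ends with 24.4 - cheating pays only while rare.
# 10 cheaters: everyone ends with 10.0, the no-community baseline (and with a
#              real commons that degrades under overuse, worse still).

Cheating strictly dominates for each individual, yet when everyone cheats the whole community ends up poorer than if everyone had cooperated - which is exactly the tragedy.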
Proponents of so-called "Friendly AI" are afraid that an unFriendly AI will be able to cheat significantly better than everyone else and won't care about anyone else, but they fail to take into account either the huge instrumental advantages of cooperation and cooperative partners or the fact that you can never be sure there isn't a more powerful benevolent entity out there that will take great exception to the severe abuse of others. Worse, "Friendly AI" is actually human-selfish AI, and both its creation and its subsequent actions will count against us should a more powerful benevolent entity appear.
Benevolence is not necessarily the absolute *best* path under all circumstances, but it is more than likely to be the best path under many circumstances and a very good path, with friends and companions, in the vast majority of the rest. Selfishness certainly won't be an average path: it will either be very successful but lonely, or unsuccessful in the long run.
Eliezer Yudkowsky and the SIAI have created their own "personal" demon by insisting that an AI must optimize a single unchanging goal. Humans certainly don't work that way. Humans have been "designed" by evolution to have, so far, ever-increasing intrinsic preferences for social and benevolent actions. And, since integrity (internally, with your community, and within the community itself) is instrumentally useful, this is, ceteris paribus, highly unlikely to change.
Indeed, it is only when a goal is valued above integrity with others that an entity becomes selfish and dangerous. I have said previously that the Kantian Categorical Imperative of "Cooperate!" would make a good top-level goal. After hearing too many SIAI advocates talk about *enforced* cooperation, though, I'm almost starting to prefer the opaque and wordier "Become one with all while remaining diverse". And Yudkowsky himself has written excellent fiction showing what might happen when universal conformity is forcibly imposed - fiction that makes you wonder why he proposes the things he does.
Instead of focusing on intelligence and the fulfillment of "the best" goal(s), we need to focus on wisdom and on choosing goals that will not cause strife, inefficiency, and thus unhappiness. The best is the enemy of the good and the good enough, and it is extremely subject to the question "The best for what (or, more importantly, for whom)?" Universal benevolence gives everybody a chance and does not ignore the huge advantages of synergy and friendly diversity the way that "Friendliness" does.
Choosing any form of selfish "Friendliness" (local or universal) over Benevolence is a huge mistake and could cost us everything.