Re: [sl4] Friendly AIs vs Friendly Humans - Problem of Sociopathic Governance

From: Jake Witmer
Date: Thu Jul 14 2011 - 09:16:23 MDT

>"is it possible to avoid "Unfriendly Humans" and encourage "Friendly Humans"?"

>"You are asking if anyone around here has a general solution to the
>problem of evil. I have a hunch the answer is no."
"avoid" entirely? Probably not possible, especially if you're talking in terms of existence.  Sociopathy in society seems partly related to genetics and phenotype (genetics + early environment).

However, institutions that apply randomized intelligence as a check on power typically do very well at diminishing human evil.  For instance: most adults who see a clear evil (a much stronger child attacking a smaller one, perhaps a toddler) would separate the two children, especially if the damage being done was extreme.  The thought process goes something like: "nothing justifies that, the fight is one-sided, the child has not yet learned basic discipline," etc.  There are multiply-redundant reasons why staying uninvolved is not the smart thing to do.  Even a sociopath might get involved on the side of good because, although he has no conscience and might not care about the result:
1) The child he saves may have grateful parents
2) Those parents might reward him
3) If he does nothing, he's learned that society would consider him to be evil, and it would attract undue attention to him and his lack of conscience
4) The possible cost / risk to himself for separating the two children is low

Similarly, juries may include one sociopath and several unintelligent or uncaring jurors.  But the default action of a properly functioning state or constitutional republic is to not punish: the jury is called only to decide whether punishment is warranted.

If the jury contains even one honest and intelligent person of conscience, it will nullify.  This pressure (the inclusion of a single honest person in any random mix of twelve nullifies) is a pressure against punishment in most cases, but FOR punishment in cases where it is clearly warranted.  Lack of punishment in most cases is normal and beneficial.
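The arithmetic behind this claim can be sketched quickly. If some fraction p of the population are honest, intelligent people of conscience, and jurors are drawn at random, the chance that a twelve-person jury contains at least one such juror is 1 - (1 - p)^12. The values of p below are purely illustrative assumptions, not figures from this post:

```python
def p_at_least_one_honest(p: float, jury_size: int = 12) -> float:
    """Probability that a randomly drawn jury of `jury_size` people
    contains at least one honest, conscientious juror, assuming each
    juror independently has probability p of being one."""
    return 1 - (1 - p) ** jury_size

# Even a small honest minority is very likely to land on a random jury.
for p in (0.05, 0.10, 0.25):
    print(f"p = {p:.2f}: chance of at least one honest juror = "
          f"{p_at_least_one_honest(p):.3f}")
```

The point of the sketch: random selection amplifies a small honest minority into a high per-jury probability of nullification, which is exactly the pressure the paragraph above describes, and which prosecutorial hand-picking removes.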

So yes, friendly humans can be selected for, to a vastly greater extent than they are now.

If the norm when a superhuman AGI is born and taught is the perverted juries we now have, then it will likely learn that human-level intelligences are governed by the law of the jungle.

Right now, intelligent juries are rare because, since 1850 in the USA, they have been hand-picked by the prosecutor.  Prosecutors are self-selected, via law school, for conformity and obedience to injustice (and this is recursive: one generation of bad results encourages even more uniformly bad results from the next, and so forth).

By eliminating perverse incentives that fail to account for sociopathy within human legal systems, human governance can be made more intelligent.  More intelligence generally means more friendliness, except in the rare cases where it does not.  Unfortunately, right now, those statistical abnormalities (sociopaths) govern the rest of us, because the majority of voters tend to let them.

Restrictions on who can vote tend only to worsen this problem, since the sociopaths then concern themselves with being included in the voting group and tailoring it to their goals (unions, prosecutors, state workers, the military, "lords," etc.).  Restrictions on government power, and the inclusion of random intelligences, on the other hand, tend to work very well.

What works well should be a guide to the future, in developing as large a majority as possible of friendly humans, acting in benevolent ways.  The best way to train an AGI toward friendliness and benevolence is to give it examples that can be recognized by its neo-cortical intelligence, and then to unite the examples with libertarian theory.

"Friendly" humans are libertarians.  They comprise a small minority of the entire human population.  To the extent a human is non-predatory or non-parasitic, he is acting as a libertarian.  Those who parasitize their families and business associates because of incompetence should probably be seen as less parasitic than those who parasitize society and millions of strangers, consolidating power in the hands of true sociopaths (like Stalin, Hitler, Mao, Pol Pot, Pinochet, etc...).

Jake Witmer

312.730.4037
skype: jake.witmer
Y!chat: jake.alfg

"The most dangerous man to any government is the man who is able to think things

out for himself, without regard to the prevailing superstitions and taboos. Almost

inevitably, he comes to the conclusion that the government he lives under is

dishonest, insane, and intolerable." -H. L. Mencken

"Let an ultraintelligent machine be defined as a machine that can far surpass all the
intellectual activities of any man however clever. Since the design of machines is one

of these intellectual activities, an ultraintelligent machine could design even better

machines; there would then unquestionably be an 'intelligence explosion,' and the

intelligence of man would be left far behind. Thus the first ultraintelligent machine is

the last invention that man need ever make."
-I. J. Good

--- On Wed, 7/13/11, John K Clark <> wrote:

From: John K Clark <>
Subject: Re: [sl4] Friendly AIs vs Friendly Humans
To: "sl4 sl4" <>
Date: Wednesday, July 13, 2011, 5:46 PM

On Tue, 21 Jun 2011 "DataPacRat" <> said:

>"is it possible to avoid "Unfriendly Humans" and encourage "Friendly Humans"?"

You are asking if anyone around here has a general solution to the
problem of evil. I have a hunch the answer is no.

  John K Clark

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT