RE: [SL4] Re: 'Singularity Realism'

From: Philip Sutton
Date: Wed Mar 24 2004 - 21:44:19 MST

Hi Mike,

> Primate politics.

I think that's too much of a generalisation. It's not bonobo politics or
orangutan politics. It certainly *is* human and chimpanzee politics
of a widespread sort, but even in the human domain this sort of politics
is *not universal*.

Jared Diamond's book 'Guns, Germs, and Steel' proposes a useful
framework for explaining why human communities vary across a
spectrum from imperial to barbaric to peaceful/cooperative.

The work by primatologist Frans de Waal is useful too (eg. his book
'Good Natured').
> If we can create a Friendly AI that consistently adheres to a superior
> morality (i.e., our best morality) even without oversight, then we may
> have a situation where humans can relax the imperatives of primate
> politics. We would quickly learn that we can no longer get away with
> doing things the old way because the FAI will not allow it.

How would you actually see the FAI doing this? Being a universal
dictator? If we go down that path my guess is some fairly nasty
primates (read: humans) will try to hijack the FAI development process
so that they (the humans) can be in charge. If super-AIs have the
prospect of being the most powerful agents around, then power-
hungry humans will be attracted to that potential power like moths to a
flame.

I think humans (a great many, though of course not all) also demonstrate
a great capacity for peace and cooperation (as well as
war/domination/exploitation). So if AIs simply worked together with
humans and human institutions committed to peaceful coexistence,
then I think the balance could be tipped decisively away from the
war/domination/exploitation end of the human spectrum. Under this
scenario the FAIs would not be designed for, or be expected to play,
the role of benevolent dictator (acting outside of any community-derived
law). Instead they would be partnered with humans and human
institutions that were working to achieve a peaceful, non-exploitative
condition, and would work in support of the application of
democratically formulated law.

If FAIs had huge intellectual capacity, deep wisdom and considerable
reach in society, they could act quite subtly to help nudge things in a
good direction without taking any extra-legal actions or acting as a
dictator, but simply by working *with* humans who shared the same
hopes for a peaceful, collaborative, democratic, joyously unfolding
future.

My current expectation is that a variety of AGIs will emerge with widely
varying degrees of friendliness, because they will be developed in a
number of organisations at about the same time, and that *at least
some* of these AIs will be brought into the orbit of fairly unfriendly
humans by one means or another (eg. into military/intelligence,
commercial or criminal orbits). (I'm assuming a relatively slow start
for AGIs, with any hard take-off occurring some years down the track.)
Where things go from there will depend on what happens amongst
humans and whether there are any truly Friendly AGIs around (with
access to adequate computing and other resources) at the same time
as the unfriendly ones are extant.

At least for a few years (maybe longer) AGIs will be caught up in the
sort of politics that you started out calling primate politics.

You might be wondering why I would think AGIs that had reached hard
take-off would bother restraining themselves to work with people
through democratic processes etc. There is no certainty about any of
this, of course. The scenario I've painted is merely one possibility
among a vast array. But I think that if we are to get friendly AI at all, we
will probably have to go through a period (however short) where there
is a pact between friendly AI and friendly humans for mutual benefit.
After that the universe (and beyond) is the limit for the AGIs (and any
uploads), but I can imagine truly friendly AGIs, while not being limited
by humans, would have no problem helping humans get their act
together in their own more limited human domain. It might be a nice
little nostalgic hobby for a few AGIs!

Cheers, Philip
