From: Simon Gordon (firstname.lastname@example.org)
Date: Mon May 19 2003 - 17:24:02 MDT
--- email@example.com wrote:
> > Ironic splutter aside....the "real problem" seems
> to sort itself out if we
> > release a society of FAIs into the physical world
> at exactly the same time.
> That doesn't seem to work for all humans.
Firstly, humans are not all equal: they are born at
different times into different generations, and their
social organisations are hierarchical.
Humans are impure intelligences for many reasons. They
began in an environment of scarcity and so evolved to
become fiercely competitive; unfortunately this
translates into corruption in many adult humans in the
modern urban environment we now live in. AIs, of
course, can be brought up in any environment we like,
and they do not need to evolve the way our society
has; in the absence of scarcity there is no need to be
super-competitive. It may seem unlikely that AIs will
naturally learn to become intrinsically friendly in a
societal structure, but they will at least have to
"appear" friendly: if they are all of equal power and
status there would be a stalemate scenario, a
standoff, like two huge nuclear powers aware that they
cannot attack each other without themselves being
destroyed. Thus the AIs would be bred into an
environment of cooperation. If they were subsequently
brought into the physical world (together, as a
collective, at the same time) then they would have to
continue their cooperation, because each would gain
its own sysop-like control over the physical
environment and eventually become a superpower.
> > then they would naturally evolve to cooperate with
> each other having played
> > lots of Prisoner's Dilemma type games and learned
> that overall the best
> > strategy was friendly cooperation.
> The larger the group of cooperators, the bigger the
> potential payoff for a
> lone defector.
It depends how the network of players is organised.
On an individual one-to-one basis the optimal strategy
is tit-for-tat, with cooperation as the first
prerogative. In a larger group, in order to secure the
overall good, the majority of AIs will cooperate, but
this type of behaviour has to be learnt through an
extended period of societal interaction. From this the
"moral good" comes about, in a kind of holistic
fashion.
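To make the tit-for-tat point concrete, here is a minimal
sketch of the iterated Prisoner's Dilemma. The payoff values
(temptation 5, mutual cooperation 3, mutual defection 1,
sucker 0) are the standard textbook assumption, not figures
from this thread; the strategy and function names are mine.

```python
# Iterated Prisoner's Dilemma sketch. Payoffs are the
# conventional T=5, R=3, P=1, S=0 (an assumption, not from
# the original post). 'C' = cooperate, 'D' = defect.

PAYOFF = {  # (my_move, their_move) -> my score
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then echo the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A lone defector, for comparison."""
    return 'D'

def play(strat_a, strat_b, rounds=10):
    """Run an iterated match; return (score_a, score_b)."""
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # defector gains little: (9, 14)
```

Over ten rounds, two cooperators each earn 30, while the
defector beats tit-for-tat only 14 to 9 and earns far less
than it would in a cooperative pairing, which is the sense in
which "overall the best strategy was friendly cooperation."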
The ideal, of course, is that ALL the AIs or members
in the group learn the "moral good" to the extent that
they become resistant to the idea of personal
defection and are able to ignore the potential
payoffs. It is certainly true that our human society
is nowhere near this ideal... but AIs can run faster
than us; they could outpace us in a few years,
perhaps, and reach this ideal much quicker. When this
happens they will have matured to a sociological level
far more advanced than ours, and so by then we would
be able to safely call them FAIs: they would have
become advanced, socially responsible agents, and it
seems plausible that we would then be able to put
greater trust in an FAI than we could in any human
being, even our closest and dearest.
Trust no-one - except the FAIs.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT