Re: [sl4] Re: Property rights

From: Bryan Bishop
Date: Mon Jul 07 2008 - 14:58:30 MDT

On Monday 07 July 2008, Stuart Armstrong wrote:
> Lee mentioned governments a few posts ago. The only justification of
> governments is to solve collective action problems (violence being
> the king of these problems so far in human history; out of control
> evolutionary arms races might be the king issue after a singularity).
> I feel we underestimate the importance of this, because we live in a
> world saturated by governments (certainly in terms of GDP), so most
> of the collective action problems that can be solved have been
> solved. And a solved problem does not attract attention.

I am not convinced that 'justification' makes sense when it comes to
governments, or any generalized human organizational system. There are
design decisions that could be justified, I suppose, but I don't know
whether justifying them and all that they do in one swoop is a useful
exercise. So, collective action problems. Note that the emergence of
peer-to-peer technology (pen, paper, computers, transceivers, ...)
means that deploying simple algorithms at the personal level can
obviate the need for a centralized governing algorithm.
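To make the "simple algorithms on a personal level" point concrete, here is a minimal sketch of pairwise gossip averaging — one classic example of peers converging on a group-wide answer with no central coordinator. The function name, peer values, and round count are my own illustration, not anything from the thread:

```python
import random

def gossip_average(values, rounds=200, seed=0):
    """Pairwise gossip: each round, two random peers average their
    local values. Every node runs the same simple local rule, yet
    the whole group converges on the global mean without any
    centralized governing algorithm."""
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)   # pick two peers at random
        mean = (vals[i] + vals[j]) / 2   # they exchange and average
        vals[i] = vals[j] = mean
    return vals

peers = [1.0, 5.0, 9.0, 13.0]
result = gossip_average(peers)
# every peer ends up near the global average, 7.0
```

The design point is that each node only ever talks to one neighbor at a time and keeps a single number of local state; coordination emerges from repetition, not from a coordinator.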

> So purely individual solutions to these sorts of problems strike me
> as very unlikely. It seems that people's descriptions of a
> post-singularity world always boost the aspects that make
> individuals more autonomous (total information, extreme mobility,
> etc.) while minimising those that would make them more dependent
> (advances in offensive weaponry, viral invasions and the need to
> defend against them, finiteness of resources, fast evolution filling
> the world with competing copies, hacks to make other minds more
> malleable, etc.)

That's an interesting observation, but take, for instance, the
distinction that you make between autonomy (extreme mobility) and
dependency (offensive weaponry). It's well known that top-of-the-line
aircraft are offensive weaponry in the world's militaries. The
difference? ITAR tries to keep the information from leaking out (even
though it doesn't work; I refer you to the build-your-own-missile man,
who has been MIA on the net for a few years now).

In the case of immune-system threats, consider the current methodology
of fighting back infections via the transference of immune defense
mechanisms, antibodies, etc. So it's already handled at an individual
level to a large degree, and sometimes the medical science behind
vaccination is just deliberate exposure to a carefully quantified dose
of the harmful agent.

There are also security measures to keep brains from blowing up, but a
good system administrator would employ multiple strategies at the same
time.
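The "multiple strategies at the same time" idea is just defense in depth: run several independent checks and reject anything that any layer flags, rather than trusting a single filter. A toy sketch (the particular checks here are made up for illustration):

```python
def layered_defense(payload, checks):
    """Defense in depth: the payload must pass every independent
    layer; any single layer can veto it."""
    return all(check(payload) for check in checks)

# Three deliberately simple, independent layers (illustrative only):
checks = [
    lambda msg: len(msg) < 1024,        # size limit
    lambda msg: "rm -rf" not in msg,    # naive signature match
    lambda msg: msg.isprintable(),      # no control characters
]

layered_defense("hello world", checks)  # True: passes every layer
layered_defense("rm -rf /", checks)     # False: one layer vetoes it
```

Each layer is weak on its own; the point is that an attack has to defeat all of them simultaneously, which is the sysadmin's rationale for stacking strategies.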

> So, despite the arguments here, a society of humans able to modify
> their minds at will is probably not going to gravitate to something
> pleasant. On the other hand, it might not take much in terms of
> coercive interventions to allow such things to happen. In fact, it
> may be enough for the AI to gift humans with many technologies, push
> them in certain self-modifying directions, and then turn itself off,
> to create a positive dynamic equilibrium.

Why singular (the) AI?

- Bryan

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT