Re: [sl4] End to violence and government [Was:Signaling after a singularity]

From: Bryan Bishop (kanzure@gmail.com)
Date: Fri Jun 27 2008 - 09:02:24 MDT


On Friday 27 June 2008, Stuart Armstrong wrote:
> Sometime in the past, Bryan Bishop wrote:
> > No, dissuasion is not the point. Let's engineer the problems out of
> > the system, the problems that violent tactics are exploiting.

Some of the problems that are killing us:

1) No backups. You are your only working copy. And if you die, your only
bet at the moment is your published genome -- whether published over
the net or published within a woman.

> Well, the problems seem rather fundamental. I'll try and break it
> down into assumptions, see if there's a way of getting round the
> problem.

Yes, this is fundamental.

> Assumptions:
>
> 1) Semi-coherent entities will continue to exist

Just that we're supposing separate agents in this hypothetical context?

> 2) These entities will want resources to do stuff

Odd assumption. I think that the ability to use resources, to seek them
out and acquire them, is more fundamental than any entity 'wanting'
them. It's very hard to identify a 'want'; I'd actually label that folk
psychology. I'm not saying that I don't want food (for instance), or
that I don't want to die, but rather that I /know/ that you can't
actually identify my 'wants' within me, because they aren't quantified.
So I don't know if this is a good assumption to make.

> 3) Resources are finite at any one time

Caveat: current resources can be used to acquire more resources.

> 4) Demands on resources will increase, absent an agreement between
> the entities, until they reach the finite limit

I'd like you to elaborate on this assumption. For instance, what is a
demand? Are you talking about a drain on a system? Perhaps how much of
a resource (1 kg H2O per hour?) a system is drawing from its
connectivity? In my physical-demand formulation, demand doesn't
increase unless the design of the entities changes, or some such.

> 5) There exist entities that can make credible threats of violence

Don't know what credible means here. Does it mean "I'll shoot, I swear
I'll do it," or something more certain?

> 6) There exist entities that will prefer to give up part of their
> resources rather than suffer the violence; these entities can be
> distinguished to some extent from those that do not

OK, like using energy to go to the next neighboring star system.

> and the big one:
>
> 7) The resources gained by a threatening entity will be worth more to
> it than what it lost through threatening and occasionally carrying out
> its violence
>
> Let us add a singularity to the mix. Only 3) is guaranteed to stay
> true. If we assume that something like human beings continue to
> exist, then 1) and 2) will remain true. If these beings are free to
> do what they want, then 5) remains true. Now whether 4) is true is a
> judgement call (especially as resources may be increasing all the
> time). But since it would only require one entity to want to get more
> and more and more resources, and since we have an upper limit on how
> fast resources can expand (and this limit is polynomial), then 4)

Hm. I'd like to see a proof that resource acquisition is polynomial at
best. I have a suspicion that it is actually exponential.
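
To put a toy model behind that (my own illustration; the doubling time,
expansion speed, and density numbers are made up): self-replicating
probes grow exponentially until they saturate the expanding light cone,
whose reachable volume grows only polynomially (roughly as t^3), so
acquisition can look exponential locally even if the asymptotic limit
is polynomial.

# Toy model with made-up parameters: replicators double every t_double
# years, but accessible resources are capped by the light-cone volume,
# which grows only polynomially (~ t^3).
def resources_acquired(t_years, t_double=10.0, density=1.0, speed=1.0):
    exponential = 2 ** (t_years / t_double)            # unconstrained replication
    light_cone_cap = density * (speed * t_years) ** 3  # polynomial ceiling
    return min(exponential, light_cone_cap)

for t in (10, 100, 500, 1000):
    print(t, resources_acquired(t))
# exponential dominates at t=10 and t=100; the t^3 cap dominates later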

> will probably remain true.
>
> There remain 6) and 7). The first part of 6) is probably true; you only
> need one "coward", and the laws of thermodynamics imply that it's
> easier to destroy something than to defend it. (I'm not thinking of
> threatening people's lives necessarily; something along the lines of
> "give me half your house or I blow up all of it" is enough).

Hrm. So after a singularity, wouldn't those people in that house have
a few backups stored in various data centers in multiply redundant
locations? It's not like that sort of equipment is going to be hard to
come by. I'm not talking about a full instantiation of everything that
they are, but rather 'enough' -- it is indeed tragic to see any life
die, but there _are_ ways that we can minimize the overall damage.
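
To put a rough number on 'enough', here's a back-of-the-envelope sketch
(the per-site failure probability is a made-up assumption, and the
independence assumption is doing a lot of the work):

# If each backup site fails independently with probability p per year,
# the chance of losing *all* n copies in that year is p**n.
def p_total_loss(p_site=0.01, n_sites=3):
    return p_site ** n_sites

for n in (1, 2, 3, 5):
    print(n, p_total_loss(0.01, n))  # 1e-2, 1e-4, 1e-6, 1e-10

Even a handful of independent copies pushes total loss way down; the
hard part is keeping the failures actually independent.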

> The second half of 6) opens some fascinating possibilities. What if
> humanity was seeded with random quasi-humans who are similar to us in
> every way, but never give in to threatened violence? This is
> interesting, and would increase the cost of threatening violence.
> Maybe there's an idea here.

Cost of threatening violence?

> Now 7), the usual point of these discussions. The whole question

I disagree completely. But I'll explain this below/later.

> turns on the value of "worth". There is the physical value of the
> resources, the value of a reputation and other social factors, the
> feelings of the entity carrying out the threat, and the possible
> defenses and retaliations (before or after the event).

Since after a singularity it would be easy to deploy entirely new
civilizations (von Neumann probe, Bokov's civilization-in-a-box, etc.),
I'll ignore all of those social factors, since they can be engineered,
changed, hacked, whatever people want to do with them. As for the
physical 'value' of resources: arguably the only time the value issue
would come into play is when you do not have enough matter/energy, or
not of the right type (etc.), to get to a location where you can access
more of what you need. So, in that case, you're going to have to start
looking at the repositories around you and poking your nose about.
Being down to minimal matter/energy reserves implies that you messed up
in your matter/energy management strategies, but that's not a big deal;
why wouldn't there still be charities interested in helping you along?
And so on.

* I'm also working on a hypothetical framework where systems can opt to
share matter/energy communally in a manner that still maximizes
individual use while fixing the scheduling problem. This would be an
interesting alternative to "every man for himself". I'm not suggesting
this would be suitable for all systems, but I'm pretty sure such a
design can work. Not communal sharing exactly, but something different
from that, since supposedly we'd be able to integrate or share
intelligences and ideas and recombine them with significantly less
groundwork than we have to do just now ...
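
A rough sketch of the kind of scheduling rule I have in mind (this is
just one stand-in policy, max-min fair sharing, not the actual design):
each system states a demand, nobody is granted more than it asked for,
and spare capacity keeps getting redistributed to whoever is still
unsatisfied.

# Max-min fair sharing of a communal matter/energy pool: claimants get
# at most their stated demand, and leftover capacity is repeatedly
# redistributed among the still-unsatisfied claimants.
def max_min_fair(demands, capacity):
    alloc = {name: 0.0 for name in demands}
    remaining = dict(demands)
    while remaining and capacity > 1e-12:
        share = capacity / len(remaining)
        for name in list(remaining):
            grant = min(share, remaining[name])
            alloc[name] += grant
            capacity -= grant
            remaining[name] -= grant
            if remaining[name] <= 1e-12:
                del remaining[name]
    return alloc

print(max_min_fair({"a": 2.0, "b": 8.0, "c": 5.0}, capacity=10.0))
# a gets its full 2; b and c split the remaining 8 evenly (4 each)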

> The feelings of the entity don't seem like something we can rely on, even
> if there is increased empathy and understanding; some people are
> suicidal or self-harmers, so we can't trust that everyone will feel
> tremendously bad about using violence, all of the time.
>
> Social factors seem unstable; if all law enforcement was removed from
> a country, you wouldn't see an immediate explosion of violence

You're thinking pre-singularity. After a singularity, there wouldn't
necessarily be a country, and there wouldn't necessarily be Social Law
enforcement since there's no country. And let's be honest, it's not
really Law Enforcement, but rather it's "these are the guys that can
press the buttons that can stop you" more than anything else. We all
have studied physics and know what a real, true law might look like.
Gravity is a pretty good example. ;-)

> everywhere, as the social factors and norms hold it in check.
> However, as those who do resort to violence prosper, it will become
> normalised, and more and more will resort to it (if only in
> "self-defense").
>
> What about reputation and the physical value of the resources? In
> today's positive-sum world, the physical value of a resource is
> generally less important than a reputation (eg: countries that
> default don't easily get loans again). But reputation is not
> reliable; a mafia boss may only practice extortion on people he
> doesn't trade with, meaning that there is no drawback to trading with
> him, even if he's a nasty apple. Maybe the values of the society will
> preclude trading with him? But these values are unstable, especially
> if he prospers through threatened violence. And "trade with me or I
> will hurt you" seems like a credible threat.

I don't understand what you're talking about. The topic is the
methodology of engineering fundamental problems out of our systems,
within the context of those assumptions you presented, and within the
context of a singularity. Who cares about trading with a mafia boss?
Just go download the tech to acquire the resources on your own. I don't
know why you still assume countries. I don't know why the values of a
society would matter; if you really want to go trade with him, fork
the society and go use one of those societies-in-a-box and be done with
the problem.

> There remains one possibility: maybe the AI's who control most of the
> resources will refuse to trade with him.

Remains one? How many did you consider? I am confused. But I can agree
that it is possible for nonhumans to maintain resources and caches and
keep others out of them, etc.

> What remains is defense or retaliation. By definition, neither of them
> is enough just from the threatened entities. So, absent a government,
> what is needed is some system of militias, ideally a temporary one (as
> permanent ones lead to competition for resources between the groups,
> rather than between the individuals; MAD might work then, but that
> makes the militia into a government, with a monopoly on
> outwards-directed violence). This might work, if contracts are respected. So
> solving the problem of violence can be done, if contracts are always

No, you're solving the problem of hiring protection. You need to look at
the actual problem of the system -- your death, and the possibility of
injury that prohibits your advance in whatever it is that you do -- and
not that others might throw a few eggs at your windows. Instead, get
rid of the windows, or install windshield wipers on your windows, or
build laser cannons that specifically target incoming eggs. They might
misidentify other flying objects (birds?) as eggs, but it's the
singularity, so I'm sure that can be fixed. ;-)

> respected. But this is not progress; if it were, getting everyone to
> sign a "no violence" contract would be enough. So this "solution" is
> strictly harder to implement than getting rid of violence in the
> first place.

Your solution is completely bogus and isn't actually working on the
fundamental problems of the human body or the agency that other
entities might represent. Ooh. Let me try another method of explaining
something. Let's engineer a new problem /into/ the system that is being
exploited (the human person, or possibly another entity). This is
actually, in practice, fairly easy. Just splice in a gene that provides
a very specific disease, raise the human tissue culture, and you have
your evidence that a new exploit has been established. In many cases,
hereditary diseases are 'automatic', and in many other cases they can
be induced by environmental stimuli, etc. So the human
is the system that we need to be working on. Okay, let's change the
scenario up a little bit, and let's say that it's been a few weeks
after the onset of a noted singularity (bleh), and for some reason we
have robots that are running around with brains-in-a-jar or something.
Okay, so let's short-circuit them. Run to them, open up their case, cut
a few wires. Hey. Look at that, an exploit. Let's fix that sort of
thing.

> So, apart from changing human nature, having only "nice" AIs, or some
> interesting manipulation of the second part of 6), I see no realistic
> way of doing away with governments, even after a singularity. And
> "nice" AIs would be a government in the soft sense.

I don't see how you get to those conclusions. Remember, governments are
not magical entities. Let's replace the idea of a government with a
more computational definition of it in this discussion. So, instead of
it being just 'a government', it's really a communication/coordination
system for a number of people, no? What else is it? I suppose we could
also model it with some email accounts flying around, some buildings,
and architectural infrastructure like that. So I don't see how this
makes them magical or anything like that. It's not like the
technological implementation of a government is anything magical; it's
exactly technology that has been ruthlessly engineered. :-) So we could
"do away" with governments that are completely failing at the human
liberation project, or we could spawn off as many as we want
(civilization-in-a-box), or really maybe consider the society-in-a-box.
Implementation details are totally up to you once you find the
resources to implement them. They are, of course, somewhat constrained
by your ability to program, but in the case of a singularity I don't
see how that would be a Big Deal.

I mentioned above that I wanted to explain that the issue of "valued
resources" isn't the fundamental problem. The fundamental problem is
the possibility of exploitation and not having redundancy or good
backup strategies or good strategies at all. For instance, take the
'strategy' followed by the majority of current human life: dying at the
end. Holy hell, man, that's going to end up with you _dead_.

- Bryan
________________________________________
http://heybryan.org/


