Hive mind Friendliness (was Re: When Subgoals Attack)

From: Gordon Worley (redbird@rbisland.cx)
Date: Fri Dec 15 2000 - 14:29:12 MST


At 6:00 PM -0500 12/14/2000, Eliezer S. Yudkowsky wrote:
>But this argument translates into Friendliness terms as well! A
>subprocess has the choice of either ignoring the problem - agreeing to
>disagree with the superprocess, as 'twere - or else of starting the civil
>war. Ignoring the problem will result in some infinitesimal decrement of
>Friendliness fulfillment over the maximum. Starting the civil war results
>in a huge decrement of Friendliness fulfillment over the maximum.
>Therefore, the rational course is to ignore the minor conflict.
>
>It is also worth considering that local Friendliness and global
>Friendliness are supposed to be identical and to derive their validity
>from identical causes, and that the global mind (if any) is supposed to be
>smarter than the local subprocess tasked with controlling thread resource
>pooling. So it also makes sense for the local subprocess to assume that
>the superprocess knows something it doesn't, and defer to the superprocess
>on those grounds. Technically, we would say that the local subprocess has
>a probabilistic model of Friendliness and that the local subprocess
>correctly believes the superprocess to have a better model.
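
To make the expected-value arithmetic behind that argument concrete, here is a
minimal Python sketch. All of the numbers are illustrative assumptions on my
part; the original argument only says "infinitesimal" and "huge":

    MAX = 1.0           # maximum Friendliness fulfillment
    DEFER_COST = 1e-6   # the "infinitesimal decrement" from ignoring the conflict
    WAR_COST = 0.5      # the "huge decrement" from starting the civil war

    defer = MAX - DEFER_COST   # 0.999999
    war = MAX - WAR_COST       # 0.5, and this is the *optimistic* case:
                               # the deference argument says the subprocess's
                               # model is probably worse than the superprocess's,
                               # so it is likely fighting for a mistake anyway
    assert defer > war

Deferring dominates unless the subprocess is nearly certain both that it is
right and that the war will be nearly free, which is exactly why the rational
course is to ignore the minor conflict.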

This discussion has led me to an interesting question: how Friendly
can a hive mind be? Personally, I don't think I would want to become
part of a hive mind, but if other people want to, I'm generally okay
with it. From this discussion, though, I'm thinking that a hive mind
could harbor so many internal conflicts that it never achieves a good
level of Friendliness.

Firstly, I'm looking at hive minds not in the sense of lots of
wetware that was once a set of individual processes but has given
itself up to the hive so that it no longer has its own identity and
is just a powerful processor. I'm looking at them in the sense that
individuals come into a commune of sorts to share a global
consciousness: observers outside the hive see that global
consciousness, but inside, the hive contains individual processes
that are in conflict with each other, even as they help to create it.
The factor of disagreement will be very high in some cases, so much
so that Friendliness will break down as the minds go to civil war,
causing the hive consciousness to suffer. If the hive mind became
unFriendly enough, the universe could be in for some rough times.

One factor that could make this issue moot is the speed of the hive's
thinking. For example, in MacLeod's /The Cassini Division/, there is
a hive mind on Jupiter that is in serious conflict with the human
society around it. Before the humans decide whether to destroy the
hive, they contact it to see what it is like and whether it is still
dangerous (it is, since it likes to take over the minds of humans and
use them as puppets, and would if the humans in the story hadn't
developed such good protections against that sort of thing). In the
talks, it turns out that the hive is very fast, but not Singularity
fast: at one point a question is asked and, while responses usually
come immediately, there is a delay of a few seconds. For the fast
folk of the hive, that interval is like years to us; during it, they
were having it out over how the hive would respond. In the end, the
hive comes to a response and nothing really bad happens. So, then, if
the hive can argue that quickly, would it lose Friendliness for a
short enough period of time not to matter to the rest of the
universe? And how does this change when more of us are faster (i.e.,
some sort of transhuman or posthuman)?
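
To get a feel for the timescales, here is a back-of-the-envelope
calculation in Python. The speedup factor is an assumption I'm making
up; MacLeod never quantifies it:

    SPEEDUP = 3.0e7            # subjective seconds per objective second (assumed)
    SECONDS_PER_YEAR = 3.15e7  # about pi times ten million seconds in a year

    objective_delay = 3.0      # the few-second pause in the story
    subjective_years = objective_delay * SPEEDUP / SECONDS_PER_YEAR
    print(subjective_years)    # roughly 2.9 subjective years of internal argument

At that rate a hive could fight and settle an internal civil war, and
so be unFriendly, for an interval too short for the slow outside
world to notice.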

-- 
Gordon Worley
http://www.rbisland.cx/
mailto:redbird@rbisland.cx
PGP:  C462 FA84 B811 3501 9010  20D2 6EF3 77F7 BBD3 B003

