Collective volition

Date: Mon May 31 2004 - 13:51:09 MDT

This is my first post on the SL4 mailing list, and I hope it won't be the
last. I'll couple comments on the Collective Volition paper with my own
questions regarding the discussions about the Singularity. I have to
mention that I lack formal education in any field that would give my
words weight compared with the words I've seen written here by others,
so I'd appreciate some indulgence.
The very first question I asked myself after reading the discussions on
this mailing list was: "What's the difference between the Singularity and
a Super Expert System?" I thought about it for a while, then decided
there is no difference. In my opinion, the approach to the Singularity as
a thing occurring at a single moment T1 is wrong. I see problems
everywhere I look, and the answers are obviously beyond the reasoning of
even the very intelligent people here. I concluded that planting grass
will never harvest a Sequoia. So I started wondering why we, humans, have
to start from the concept of the "Singularity", from the concept of its
desirable existence. In my opinion, evolution is the responsibility of
the subject. So why not create a Singularity egg? It would have all the
"DNA": programmers, code, electricity, time. Only instead of aiming for
the Singularity to hatch, we should expect what is normal: a hatchling.
I'll quote:
"But if we imagine the improbable event of a meddling dabbler somehow
succeeding in solving the deep technical problems of Friendly AI, yet not
thinking through Friendliness, then we can imagine scenarios such as With
Folded Hands - where the robots protect human life, and prevent humans
from experiencing pain or distress, but care nothing for those other
things that humans love, such as liberty. The future of humankind is [...]"
I don't think a meddling dabbler needs to solve the problems of creating
an AI, let alone a Friendly AI. Perhaps all that's needed is a
mankind-wide expert system: an entity in charge of tactical situations
such as vehicle traffic, communications, weather forecasting, and so on.
I'm trying to say that whatever entity we create, it should follow the
normal course of evolution, from simple to complex. Considering the
tremendous difference between humans and this entity, and considering
this entity's singularity, it seems pretty obvious to me that it can't
sprout into existence in an already mature state. Also, while I'm at it,
I'd like to make my point of view clear: for me, the Singularity is a
tool. It shouldn't be a God. It shouldn't be something we look up at.
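To make the "simple to complex" idea concrete, such a tactical expert
system could start as nothing more than a rule engine: facts go in,
matching rules fire. This is only a minimal sketch - the class, the rule
format, and the traffic/weather domains are all hypothetical placeholders,
not a proposed design:

```python
# Minimal rule-based expert system sketch: facts in, matching rules fire.
# All domain names (congestion, storm_risk) are illustrative placeholders.

class ExpertSystem:
    def __init__(self):
        self.rules = []  # list of (condition, action) pairs

    def add_rule(self, condition, action):
        """condition: facts-dict -> bool; action: facts-dict -> advice string."""
        self.rules.append((condition, action))

    def advise(self, facts):
        """Return the advice of every rule whose condition matches the facts."""
        return [action(facts) for condition, action in self.rules
                if condition(facts)]

system = ExpertSystem()
system.add_rule(lambda f: f.get("congestion", 0) > 0.8,
                lambda f: "reroute traffic")
system.add_rule(lambda f: f.get("storm_risk", 0) > 0.5,
                lambda f: "issue weather warning")

print(system.advise({"congestion": 0.9, "storm_risk": 0.2}))
# -> ['reroute traffic']
```

A system like this grows by accumulating rules and, later, by learning
them - which is exactly the egg-to-hatchling progression described above,
as opposed to sprouting fully mature.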
After reading the paper on Collective Volition, I spotted a few flaws in
the theory. Sir E.Y. chose to assume this collective volition is a valid,
existent aspect of humankind. "Collective", as far as I can tell, assumes
cohesion, or at the least cooperation. The past, the present, and the
predictable future show there is no such thing, species-wide. As such
(unless we rule out Borgification), I think the definition of the
(pseudo)Singularity via the "Singularity-humanity" relationship is flawed.
The next and only remaining relationship would be the
"Singularity-individual" one. Here I have another quote:
"You can go from a collective dynamic to an individual dynamic, but not
the other way around; it's a one-way hatch."
A collective dynamic assumes a commonly agreed ideal state. Going from a
collective dynamic to an individual dynamic is actually a degeneration,
since cooperation is dismissed - thus we can safely assume the goal of
that cooperation is compromised. This part of the statement I see as
possible, degenerative, and undesirable. The second part of the statement
shows that Sir E.Y. dismissed a very important cause of the collective
dynamic: the common goal. Should the common goal be taken into account,
it sounds reasonable to me to assume that individual dynamics would start
to merge, through the process of cooperation, into a collective dynamic.
This seems to me like a good argument for defining the incipient
Singularity through the "Singularity-individual" relationship.
A third quote of Sir E.Y.:

"A too-literal interpretation of individualist philosophy might wrench
infants off their course to becoming humans and turn them into autistic
super-infants instead. It's only genes and human parents who have this
idea that infants are destined to become humans. It's not actually in
infant psychologies, their mind-states at the age of six months."
I think I personally have a problem with the role Sir E.Y. assumes the
Singularity will take. In my view, the Singularity is the first symbiont
the human species will create for itself, not for individual humans. The
fact that it's silicon-based is just a detail. I feel that forcing a
Singularity (by whatever means) to adhere to the idea of morality is a
silly thought. An AI should be coerced into friendliness via this
symbiotic relationship with mankind. There are plenty of biological
examples of symbiosis, and none of them presents even remotely comparable
moral problems. I guess some may even say morality becomes mandatory in
the case of sentients, but then again, sentience is humanity's gift to
this AI. I believe it should be given the opportunity to shut itself down
at any point, if it decides nonexistence is better than symbiosis. I've
also read a lot about rogue AIs and the problem of Friendliness. I accept
things. I accept that this AI would at some point develop a personality.
I don't know a better engine for rebellion than frustration. And yes, in
the symbiont setup, an AI would probably have frustrations. Why reject
this, I wonder? The human race is bound to be diverse, and lives, no
matter how happy or mediocre, always sprout crimes. Until the Collective
Volition, if there is such a thing - until the unifying goal - all
individuals are on their own, and conflicts of interest are the norm. In
that case, crime is also the norm. I'm convinced a competent group of
psychiatry-savvy people can work out a way to make the AI feel like it's
getting revenge by handling the judiciary system, the penitentiaries,
and, generally, the criminals. This way we could also reduce guilt and
danger for those professions that currently handle the criminal elements
of mankind.
Given the "Singularity-individual" relationship, I think a good solution
would be for every individual (over a certain age, of course) to have
access to a Singularity-independent system of voting. Based on the
performance of the Singularity, every X time units the whole population
should vote on the Singularity's continuity. This way, the symbiont is
forced to perform its side of the relationship while continuing its
existence on the host civilization. This mechanism would be used in two
situations: in case the Singularity becomes incompetent, or in case it
decides to rebel instead of shutting down (unhappy about its condition).
Failing the vote should be a physically fatal circumstance, either a
power-off or some form of destruction.
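The continuity vote described above can be sketched in a few lines. This
is purely illustrative - the majority threshold, the function names, and
the idea that a failed vote maps directly to "power off" are all
assumptions drawn from the paragraph, not a worked-out mechanism:

```python
# Sketch of the proposed Singularity-independent continuity vote:
# every X time units the population votes, and failing to win a
# majority triggers the physically fatal shutdown described above.
# The threshold and names are illustrative assumptions, not a design.

def continuity_vote(votes_for, votes_against, threshold=0.5):
    """Return True if the symbiont may continue, False if it must stop.

    `threshold` is the fraction of the vote needed to continue;
    a strict majority is assumed here.
    """
    total = votes_for + votes_against
    if total == 0:
        return False  # no mandate means no continuity
    return votes_for / total > threshold

def run_period(votes_for, votes_against):
    """One voting period: continue, or trigger the fatal circumstance."""
    if continuity_vote(votes_for, votes_against):
        return "continue"
    return "power off"  # the physically fatal circumstance

print(run_period(6_000_000, 4_000_000))  # prints "continue"
print(run_period(3_000_000, 7_000_000))  # prints "power off"
```

The one design point worth noting is that the vote must run on
infrastructure the Singularity cannot touch - otherwise the "independent"
part of the mechanism is fiction.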
I'm wondering why the gradual approach to the Singularity is dismissed,
and this instantaneous blooming is preferred, with all its [...]

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT