From: Sebastian Hagen (sebastian_hagen@gmx.de)
Date: Sat Sep 25 2004 - 13:22:56 MDT
Nick Bostrom wrote:
> On a vaguely related note, I've had a half-written essay lying around
> since 2001 on how some "upward" evolutionary trajectories could lead to
> dystopian outcomes, which I've now finally completed:
> http://www.nickbostrom.com/fut/evolution.html.
>
Note: the following quotes are taken from this essay.
"And we can surely assume that at least some current human individuals
would upload if they could, would make many copies of themselves if
they were uploads, and would happily embrace outsourcing or forego
leisure if they could thereby increase their fitness."
Disclaimer: I would probably forego leisure, and quite possibly embrace
outsourcing, if I had the chance to reliably modify my own source code.
Making a lot of active copies does not seem like an effective strategy
to me; aside from keeping (inactive) backups, I would probably use
available computronium mostly to expand my own mind.
Copying would probably be effective if there were several "lumps" of
available computronium with significant communication delays between
them, but lacking that, just expanding one's own mind appears more
effective.
Still, given the chance, self-optimizing myself into an
"All-Work-And-No-Fun" mind may well be among the first things I'd do
after uploading, so some of my possible future selves would likely be
penalized by an implementation of your suggestions. My opinion on those
suggestions is therefore probably biased.
"Some highly complex entities such as multinational corporations and
nation states contain human beings as constituents. Yet we usually
assign these high-level complexes only instrumental value.
Corporations and states are not (it is generally assumed) conscious;
they cannot feel phenomenal pain or pleasure. We think they are of
value only to the extent that they serve human needs."
I don't think this is the case simply because these entities are
significantly more complex than human minds. Since human beings have
rather slow and in many ways inefficient input/output channels, their
communication with their environment is far more limited than
communication within their own minds. As human beings are the most
important constituents of these large entities, the different parts of
such organizations can only communicate with each other relatively
inefficiently, and are often out of sync. The situation is compounded by
the fact that many humans regard their involvement in these
organizations only as a subgoal, as opposed to a goal at the highest
level of their goal system; in other words, they don't really want their
goal system synced to the goals of the organization either. This
severely limits the effectiveness of those organizations: different
parts of them may, and often do, work against each other, based on
differing availability of information or differing goals at the level of
those parts.
Most of these characteristics do not necessarily apply to intelligent
processes of significantly higher-than-human complexity, and since these
limitations impair efficiency, they are subject to being optimized away.
"We can thus imagine a technologically highly advanced society,
containing many sorts of complex structures, some of which are much
smarter and more intricate than anything that exists today, in which
there would nevertheless be a complete absence of any type of being
whose welfare has moral significance."
I don't understand this last statement. Why do we have the right to
declare an absence of moral significance without being capable of
understanding either the society described or objective morality (if
there is anything like that)? The society may be decidedly unhuman, but
that alone is, imho, no justification for declaring it morally
insignificant. What is so special about humans that makes them morally
significant in contrast to the other systems you describe?
"Perhaps what will maximize fitness in the future will be nothing but
non-stop high-intensity drudgery, work of a drab and repetitive
nature, aimed at improving the eighth decimal of some economic output
measure. Even if the workers selected for in this scenario were
conscious, the resulting world would still be radically impoverished in
terms of the qualities that give value to life."
Why? I don't understand why the activities mentioned just before the
quoted part ("humor, love, game-playing, art, sex, dancing, social
conversation, philosophy, literature, scientific discovery, food and
drink, friendship, parenting, sport") are relevant to the value of human
life. At least some of them are useful in current human societies for
practical reasons, but I don't think that justifies the assumption that
they have objective moral significance.
From the perspective of an agent that is striving to be non-eudaemonic
(i.e., me), the proposed implementation looks like something that could
destroy a lot of problem-solving efficiency. If a (renormalized)
Collective Volition came to the conclusion that this is a good idea, I'd
respect it (since the decision would have been made by transhuman minds
that were originally human ones), but human-level minds forcing this
kind of two-class society on transhuman ones looks like a very bad idea
to me.
Sebastian Hagen