Re: New Term: Apexmind?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Dec 05 2001 - 06:47:04 MST


Damien Broderick wrote:
>
> I wasn't *really* suggesting `Prefect', you understand (although it amuses
> me that it's an anagram of `Perfect'); I was gesturing toward the nature of
> the beast under discussion. `Monitor' is another possibility; as are
> `Custodian', `Protector' and `Steward'. The last of these is perhaps the
> most general and least offensive... although the idea itself remains rather
> offensive however it's parsed.

Doesn't that last remark say it all...

If you think the idea has offensive consequences, you're going to pick a
term that has connotations that remind you of those offensive
consequences. This, from our perspective, short-circuits the process of
rational argument, which should start with a morally neutral term
describing what the hypothesis *is*.

A Sysop is not a Prefect, Monitor, Custodian, Protector, or Steward. At
most it might be a Protector, and the reason this term is salvageable is
that a "protector" does not specify how something is being protected, or
for what motive; the other terms all have specific connotations of
political and moral authority in a human social context.

The Sysop is not a human mind. If it were, most of this would be nonsense
and the rest would be actively dangerous. This is something that becomes
possible only when you step outside the realm of evolved minds and start
considering what a mind-in-general can be asked to do. If you import
terms that have specific connotations and meanings in the context of human
society, you are anthropomorphizing the whole situation; you have sucked
all the interestingly alien aspects out of it.

In this sense, Gordon Worley's Unix Scenario, in which "root" is not a
*conscious* process, is psychologically superior to the Sysop Scenario; it
is less likely to be confused with human ideas of gods, fathers, and other
extrema of the "tribal chief" concept. Unfortunately I also think the
Unix Scenario version is less plausible, but that's a separate issue.

Humans have a phobia of minds, which unfortunately extends from human
minds (where it is justified) to minds in general (since no other mind
types were encountered in the ancestral environment). Someone looking
over Gordon Worley's Unix Scenario says "Hm, underlying reality works
according to certain definite physical rules; there are no minds here; I'm
probably safe." Someone in a Sysop Scenario is just as likely to be safe,
but the human instincts look over the Sysop Scenario and say: "There is a
mind here; that mind is likely to act against me," or even worse, "There
is a mind here; this mind is an extremum of the concept tribal-chief;
therefore, this mind will boss me around." The motivations of a nonhuman
superintelligence that does not *want* to boss you around can be just as
solid a safeguard as an absolute physical impossibility of interference.
The fact that your sexual habits are of absolutely no concern to the
singleton substrate means that your midnight assignations might as well be
outside the light cone of the solar system; the only difference is that
nobody else can interfere with you either.

Outside the human realm, where we are dealing with real extremes of
cognition instead of imagined extremes of social categories,
superintelligent motivations can be just as solid and impartial as
physical law. Maybe, to reflect
this, we should skip both Sysop Scenario and Unix Reality and go straight
to discussing Michael Anissimov's ontotechnology scenarios. Suppose that,
for some reason, there's a rule that says you can't hurt someone without
their consent. Is it because the Sysop predicts a violation of volition?
Because the low-level rules of Unix Reality don't permit the physical
interaction? Because, back in the dawn of the Singularity, the first
Friendly SI made some quiet adjustments to the laws of physics? Because
of something entirely unimaginable? What difference does it really make,
except to human psychology?

If it's theoretically possible for transhumans to retain motivations that
would make them hostile toward other transhumans, then there is a possible
problem of transhuman war or even transhuman existential catastrophe; but
there exists at least one comprehensible proposed solution to this
problem, and it is therefore disingenuous to present the problem as
unsolvable.
Maybe totally unrestricted technology for everyone in the universe,
including humans who've refused intelligence enhancement and still have
their original emotional architectures, won't threaten the welfare of one
single sentient being, for reasons we can't now understand. But if not,
we know what to do about it. That's all.

The utility of discussing the Sysop Scenario is this: we retain the
ability to say "There are no known unsolvable problems between us and the
Singularity". Nothing more. It's a *prediction*, not a *decision*;
whether Unix/Sysop/whatever is actually needed would be up to the first
Friendly SI.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


