Re: "Hostile" transhumans (Re: Magic means large search spaces)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Jul 21 2005 - 15:43:59 MDT


Phil Goetz wrote:
> --- "Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
>
>>Allowing the existence of a hostile
>>transhuman is just plain STUPID, end of story.
>
> Can you define what it means for a transhuman to be hostile
> clearly enough that you don't have to kill all transhumans?
>
> Suppose that transhumans had no particular ill will to
> ordinary humans, but they were very successful in
> everything that they did, so that transhumans were seen
> to be on their way to becoming the only dominant economic
> force, soon to legally control 95% of the cash, property,
> and land in the world, and ordinary humans would be the
> equivalent of Americans without college degrees.
> Would the ordinary humans see these transhumans as hostile?

If they're smarter than Eliezer, their only activity of any long-term
importance is building an AI. Sucking up all the money in the world is a side
issue that no one will care about in two hundred million years. And please
note that this situation holds if you've got even one >E intelligence in the
crowd, starting from day one of that >E's existence. So the "cash, property,
and land", like the plural in "transhumans", is a distraction that would lead
you to expect the crisis to materialize much later than it actually would.

Note also the creativity, or even apparent magical quality, of building an AI.
Didn't think of that, didja?

*Any* hostile transhuman is a huge problem, even if s/he is merely a slightly
augmented human, holed up in an apartment in Venezuela coding apparent
gibberish on a laptop with no Internet connection. Though not as much of a
problem as a UFAI running at a million times the human subjective rate. Nor
can we make the default presumption of hostility for the Venezuelan.

> I don't see how Eliezer's viewpoint can pragmatically
> permit the co-existence of humans and transhumans.

Let's rephrase. For *humans* to deliberately allow the existence of a hostile
transhuman is just plain STUPID. (Once again, gotta keep that two-place
predicate from becoming a one-place predicate; no action is "stupid" except
relative to some goal system.)
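
To make the two-place point concrete, here is a minimal sketch (the Python
names below are purely illustrative, nothing more): "stupid" takes an action
*and* a goal system; collapse it to one argument and you've silently fixed
some goal system without saying whose.

    # Hypothetical illustration: "stupid" as a two-place predicate.
    # An action is only stupid relative to some goal system's evaluation.
    from typing import Callable

    GoalSystem = Callable[[str], float]  # scores an action by how well it serves the goals

    def is_stupid(action: str, goals: GoalSystem, alternatives: list[str]) -> bool:
        # Stupid *relative to* this goal system: some available alternative
        # serves the goal system strictly better.
        return any(goals(alt) > goals(action) for alt in alternatives)

    # A one-place is_stupid(action) would have to smuggle in a fixed goal
    # system, which is exactly the confusion the parenthetical warns against.
    human_goals: GoalSystem = lambda a: {"allow hostile transhuman": -1.0,
                                         "don't": 1.0}.get(a, 0.0)

    print(is_stupid("allow hostile transhuman", human_goals, ["don't"]))  # True, for humans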

On the other hand, if I build a CV-type FAI and it goes ahead and permits the
existence of hostile transhumans, I can see any number of ways that could work
out okay. Maybe they all live on Neptune, etc. *You* don't permit something
with hostile goals that is smarter than you are. It's okay to build a
Friendly-type thing that can then permit the existence of hostile transhumans
that are smarter than you and dumber than it. Maybe the rules change with
increasing intelligence, so that an SI can knowably safely permit the
existence of hostile minds bigger than itself, if it has a positional
advantage over them such as root permission on their operating system -
because it *knows* its code is flawless and its model of physics is as good as
it gets, etc. Superior intelligence might or might not always be magic, but
it's magic at this particular level of intelligence.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

