Re: Game theoretic concerns and the singularity (was RE: Are we Gods yet?)

From: James Higgins
Date: Fri Aug 02 2002 - 12:51:44 MDT

Michael Anissimov wrote:
> James Higgins wrote:
> Days? I would estimate that the window of difference allowed for two
> transhumans newly initiating a cycle of strong self-improvement to
> maintain game-theoretic equivalency would be somewhere in the
> nanosecond or microsecond range. In addition, I would find it highly

Depends on the actual rate of progress and on at which point, exactly, an
intelligence is considered to be an SI rather than a transhuman. The fact is
that any discussion of these matters is mostly opinion based on a handful of
(possibly) related facts. I was being conservative by using days, since
I would imagine that being behind by more than days (if we see hard
takeoff with exponential progress) could easily render any lagging AIs
irrelevant.

> likely that posthuman SIs would choose to reconcile their differences
> (likely to be minor, in terms of goals) rather than burning resources
> in a mutually wasteful physical conflict. Looking at your rhetoric,

Well, it is likely that it would take an insignificant amount of
resources for an SI to halt or destroy even thousands of transhumans.
Much like it takes an insignificant amount of human resources to keep a
large number of apes (and the like) confined in zoos.

As stated many times previously on this list, it is impossible to say
anything about what an SI is, or is not, likely to do. So theorizing
that it is "likely that posthuman SIs would choose to reconcile their
differences" is pure speculation. Also, why is it likely that their
differences (in terms of goals) will be minor? Please explain how you
determined that.

Also, while it has little effect on me personally, I'd suggest not using
terms like "rhetoric" unless you are purposefully trying to piss someone off.

> you seem to be thinking in terms of "all sentients will necessarily
> have strong observer bias", which, of course, is dangerous when
> seriously considering the motivations of entities outside of the
> familiar phase space.

Well, as much as possible, I try not to consider what SIs will be like
(since it is impossible to know). Transhumans are a tiny bit easier (at
least at the lower end of the spectrum), but it is still virtually
impossible to predict much, if anything.

What I was trying to point out is that, unless two or more SIs are at
least within days of each other, the SI that gets there first calls all
the shots. That SI will be able to do whatever it likes with any
transhuman AIs and even with us. I was not trying to speculate on what
it would do, just what it could do.

> Wow, you've got a powerful Us/Them complex going on when you talk about
> SIs. You talk as if all humans upgrading to Powerhood isn't the only
> long-term inevitability, as if an indifferent SI could come into

I think you're trying to read too much into my grammar. I believe we will
create *one* SI (unless that SI wants company). I also believe that if
it is possible here, it will occur elsewhere in the universe. Thus the
SI created by us will likely encounter other (non-terran) SIs.

Also, I don't think that "all humans upgrading to Powerhood" is at all
inevitable. It is probably more likely that we won't be upgraded. This
depends on the goals of the first SI, of course, which we can't predict.

> existence but not see mankind as building blocks, and as if the first
> benevolent transhuman won't create a moral singleton to protect
> individual rights (in the case of a malevolent or indifferent
> transhuman, everything goes black immediately). Out of curiosity, may

There are many problems here. First, this "moral" singleton is moral in
whose eyes? It is incredibly unlikely that any single morality would be
acceptable to all humans. So the best case is that most humans consider
this SI moral, but even then a large number will still consider it immoral.

Second, if we get transhumans before an SI, the odds that we will get a
single SI go way down. If we somehow go the transhuman route (instead
of the AI route) then humans will eventually become SIs. And it is much
more likely, on that path, that we get a large number of SIs occurring
at roughly the same time. It is difficult to predict how this might end
up; it could be good or bad. It depends on the nature of most of the SIs,
which we can't predict.

There is a problem with having a single SI that oversees / protects a
population of lesser intelligences. Should it ever encounter another SI
which is violent it may have great difficulty defending itself since it
would have to spend significant energy being "moral" to its charges.
And, depending on its concept of morality, it may not be able to defend
itself at all. This has been previously discussed on the list extensively.

> I ask you if you've mulled over any of these concepts before? Also,

Yes, quite a lot actually.

> even if Earth-originating SIs ran into extraterrestrial SIs, wouldn't
> that potential occurrence be so insanely far into the subjective future
> as to render it irrelevant to us today?

I don't know; do you consider 20 (real time) years from now to be
irrelevant? Is there some magic way we can predict when such an event
could occur? The presence of extraterrestrial SIs could become apparent
within microseconds of a terran SI forming. In any case, I don't see
how such a significant event could be considered irrelevant.

James Higgins

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT