Re: How hard a Singularity?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jun 22 2002 - 13:38:24 MDT


Ben Goertzel wrote:
>
>> I suppose I could see a month, but anything longer than that is pretty
>> hard to imagine unless the human-level AI is operating at a subjective
>> slowdown of hundreds to one relative to human thought.
>
> I understand that this is your intuition, but what is the reasoning
> underlying it?

That there is *nothing special* about human-equivalent intelligence!

From LOGI:
>>
>> Once AI exists it can develop in a number of different ways; for an AI to
>> develop to the point of human-equivalence and then remain at the point of
>> human-equivalence for an extended period would require that all liberties
>> be simultaneously blocked at exactly the level which happens to be
>> occupied by Homo sapiens sapiens. This is too much coincidence. Again,
>> we observe Homo sapiens sapiens intelligence in our vicinity, not because
>> Homo sapiens sapiens represents a basic limit, but because Homo sapiens
>> sapiens is the very first hominid subspecies to cross the minimum line
>> that permits the development of evolutionary psychologists.

> Say we have this AI mind with a nonhuman intelligence, roughly as smart
> as Ben or Eliezer. Say this AI mind already uses a huge amount of
> computational resources, and obtaining more rapidly is not financially
> possible.

You are now furthermore assuming that our AI can find no sufficiently
remunerative employment, cannot borrow sufficient funding, cannot get a
large number of donated cycles from interested scientists, cannot rent a
computing grid for long enough to expand its mind and reengineer itself,
cannot (or chooses not to) steal cycles, cannot design new hardware...

The problem, as I said, is that for an AI to bottleneck at human
intelligence, all liberties must be *simultaneously* blocked.

> This mind now has to re-engineer its software to make itself smarter.

By hypothesis, the AI just made the leap to human-equivalent smartness. We
know from evolutionary experience that this is a highly significant
threshold that opens up a lot of doors. Self-improvement should be going
sixty at this point.

> Maybe there are only a limited number of tweaks it can make to improve
> its intelligence, without totally rearchitecting itself.

Then why wouldn't it totally rearchitect itself?

> So, with these tweaks, it becomes a bit smarter than Ben or Eliezer.
>
> OK, what's next?

Every single decision that Ben or Eliezer made, while creating the AI,
becomes open to reconsideration at that higher level of intelligence.

> It has to completely rearchitect itself, i.e. come up
> with a new and better AI design. Furthermore, it doesn't have that much
> hardware available for experimentation, unless it wants to cannibalize
> its own mind-hardware...

It can experiment on arbitrarily small pieces of itself.

> Where do you come up with a "one month upper bound" for this
> rearchitecture process?
>
> I think a one month estimate is plausible, but I don't see why "anything
> longer than that" should be "hard to imagine."

Because of what I see as the earth-shattering impact of an AI transforming
itself to one intelligence grade level above "Ben or Eliezer". The doors
opened by this should be more than enough to take the AI to serious
transhumanity. In many ways humans are *wimps*, *especially* when it comes
to code! I just don't see it taking all that much effort to beat the pants
off us *at AI design*.

Perhaps your differing intuition on this has to do with your belief that
there is a simple mathematical essence to intelligence; you are looking at
this supposed essence and saying "How the heck would I re-engineer whatever
the mathematical essence turns out to be? It's an arbitrarily hard problem;
we know nothing about it." But I do not believe intelligence has a simple
mathematical essence. I am looking at the complex system which implements
human intelligence and saying: "I can see how this system produces
intelligence, and it's a beautiful piece of crap, but it's still a piece of
crap."

> Maybe it won't go this way -- maybe no conceptual/mathematical/AI-design
> hurdles will be faced by a human-level AI seeking to make itself vastly
> superhuman. Or maybe turning a human-level mind into a vastly superhuman
> mind will turn out to be a hard scientific problem, which takes our
> human-level AI a nontrivial period of time to solve....

Which all sounds reasonable until you realize that there's nothing special
about "human-level" intelligence. If, under our uncertainty, the AI
trajectory with a big bottleneck between "human-level" and "superhuman"
intelligence is plausible, then the 40 other trajectories with big
bottlenecks between various degrees of infrahuman and transhuman AI are
equally plausible. From *our* perspective, nearly all of those trajectories
look like a direct jump from human-equivalent to transhuman AI, with no
extended pause at our level. Arguing for a privileged bottleneck at exactly
the human level is *not* just as plausible as anything else; like Pascal's
Wager on a privileged Christian God, it is a rationalization rather than a
good bet.

>> Even if your goal is to progress exponentially in enlightened spiritual
>> directions, exponential physical progress is still a good way to get
>> the computing power to support that enlightened spiritual stuff and
>> bring others in on the fun.
>
> Perhaps, or perhaps not. Perhaps the super-AI will realize that more
> brainpower and more knowledge are not the path to greater wisdom ...
> perhaps it will decide it's more important to let some of its
> subprocesses run for a few thousand years and see how they come out!

Okay: you say that, and you see something we "just don't know". I hear you
say that, and what I see is a specific, highly anthropomorphic and even
contemporary-culture-morphic set of memes about "wisdom": how wisdom relates
to ostentatious disregard for material things, the wisdom of inaction,
stopping to smell the roses, and so on.

> We don't yet fully understand how hard the scientific problem of creating
> a human-level AI is. And we don't yet fully understand how hard the
> scientific problem of transforming a human-level AI into a vastly
> superhuman-level AI is. Until we understand these things, we can't
> forecast the end-game of the path to the Singularity in any detail,
> though we can certainly huff and puff about it a lot should we find such
> an occupation entertaining...

By asking about "human-level" and not any of the twenty surrounding, equally
plausible intelligence grades, you are manipulating your uncertainty to
support one answer, just as Pascal expressed his uncertainty about the
existence of the Christian God and not the millions of other possible deities.

I see your uncertainty about AIs chanting mantras in the same way; you're
being "uncertain" about something that has very specific cultural imagery
behind it.

Uncertainty is very easy to manipulate, and it's very easy to get away with
socially. Can't be anything wrong with admitting you don't know, right? So
why not admit that you don't know whether the stars control your fate, and
read your horoscope just in case? I don't understand how you can be so
"dead certain" about such a thing when millions of people disagree with you...

Being "uncertain" is the easy way out. "Uncertainty abuse" is a major
source of modern-day irrationality. It's socially acceptable and is
frequently mistaken for rationality, which makes it doubly dangerous.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

