RE: How hard a Singularity?

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jun 22 2002 - 16:00:46 MDT


> Ben Goertzel wrote:
> >
> >> I suppose I could see a month, but anything longer than that is pretty
> >> hard to imagine unless the human-level AI is operating at a subjective
> >> slowdown of hundreds to one relative to human thought.
> >
> > I understand that this is your intuition, but what is the reasoning
> > underlying it?
>
> That there is *nothing special* about human-equivalent intelligence!

Wrong! There is something VERY special about human-level intelligence, in
the context of a seed AI that is initially created and taught by humans.
Human-level intelligence is the intelligence level of the AI's creators and
teachers.

Our seed AI is going to get its human-level intelligence, not purely by its
own efforts, but largely based on the human-level intelligence of millions
of humans working over years/decades/centuries.

The process by which the seed AI gets human-level intelligence is going to be
a combination of software engineering, self-organization, explicit teaching
by humans, and explicit self-modification by the AI.

The process by which it gets from human-level intelligence to superhuman
intelligence will probably be different: teaching by humans will be less
useful, software engineering by humans may be more difficult... so it's left
with self-organization and explicit self-modification.

I am not at all saying that human intelligence is some kind of basic limit.
I am saying that becoming smarter than your creators and teachers is harder
than becoming AS SMART as them, because you no longer have as much help
along the path.

It seems that sometimes you attempt to refute one of my arguments by
recycling a refutation you have previously created to use against another,
different argument (usually a much more foolish one than the one I'm
actually making!).

> You are now furthermore assuming that our AI can find no sufficiently
> remunerative employment, cannot borrow sufficient funding, cannot get a
> large number of donated cycles from interested scientists, cannot rent a
> computing grid for long enough to expand its mind and reengineer itself,
> cannot (or chooses not to) steal cycles, cannot design new hardware...

No. I am not assuming that the AI cannot do these things.

I am assuming that the AI may take more than your posited ONE MONTH UPPER
BOUND to do these things, and to use the resources it has thus obtained to
turn itself into a superhuman intelligence.

> > This mind now has to re-engineer its software to make itself smarter.
>
> By hypothesis, the AI just made the leap to human-equivalent
> smartness. We
> know from evolutionary experience that this is a highly significant
> threshold that opens up a lot of doors. Self-improvement should be going
> sixty at this point.

A metaphor like "going sixty" is not very convincing in the context of a
quantitative debate about the rate of a certain process ;> Especially
because 60 is a slow speed limit. Here in New Mexico we can legally drive
75 ;->

> Because of what I see as the earthshattering impact of an AI transforming
> itself to one intelligence grade level above "Ben or Eliezer". The doors
> opened by this should be more than enough to take the AI to serious
> transhumanity. In many ways humans are *wimps*, *especially*
> when it comes
> to code! I just don't see it taking all that much effort to beat
> the pants
> off us *at AI design*.

You may be right. However, creating and implementing a design for a
superhuman AI in ONE YEAR rather than ONE MONTH would still definitely
qualify as "beating the pants off us at AI design." Maybe even the
undergarments too!

> Perhaps your differing intuition on this has to do with your belief that
> there is a simple mathematical essence to intelligence; you are
> looking at
> this supposed essence and saying "How the heck would I
> re-engineer whatever
> the mathematical essence turns out to be? It's an arbitrarily
> hard problem;
> we know nothing about it."

Of course, this is not my perspective. You should know my perspective
better than that by now. I do think there is a very simple mathematical
essence to intelligence, but I don't think there is any simple mathematical
essence to *achieving a certain degree of intelligence within certain
resource constraints*. And I do not think that I know nothing about the
essence of intelligence; I think I know a great deal about it!!

> > Maybe it won't go this way -- maybe no
> conceptual/mathematical/AI-design
> > hurdles will be faced by a human-level AI seeking to make itself vastly
> > superhuman. Or maybe turning a human-level mind into a vastly
> superhuman
> > mind will turn out to be a hard scientific problem, which takes our
> > human-level AI a nontrivial period of time to solve....
>
> Which all sounds reasonable until you realize that there's
> nothing special
> about "human-level" intelligence. If, under our uncertainty, the AI
> trajectory with a big bottleneck between "human-level" and "superhuman"
> intelligence is plausible, then the 40 other trajectories with big
> bottlenecks between various degrees of infrahuman and transhuman AI are
> equally plausible.

No, because in getting up to human-level intelligence, the system has our
help, and we have human-level intelligence. We can teach it. We cannot do
nearly so good a job of teaching a system to be vastly smarter than
ourselves! There may well be other counterbalancing factors, but this
factor in itself would cause a slowdown in intelligence increase once the
human level is passed.

> > Perhaps, or perhaps not. Perhaps the super-AI will realize that more
> > brainpower and more knowledge are not the path to greater wisdom ...
> > perhaps it will decide it's more important to let some of its
> > subprocesses run for a few thousand years and see how they come out!
>
> Okay, now you say that and see something we "just don't know". I
> hear you
> say that and what I see are specific, highly anthropomorphic and even
> contemporary-culture-morphic memes about "wisdom", and how wisdom
> relates to
> ostentatious ignorance of material things, the wisdom of
> inaction, stopping
> to eat the roses, and so on.

I think that the possibility I raised in the paragraph you responded to is
very unlikely, but I also think it shouldn't be ruled out. Of course, the
possibilities I think of are biased by my human nature and cultural
background. There are many other possibilities that are less humanly
natural but would still have similar results.

> Being "uncertain" is the easy way out. "Uncertainty abuse" is a major
> source of modern-day irrationality. It's socially acceptable and is
> frequently mistaken for rationality, which makes it doubly dangerous.

Uncertainty can be used as a psychologically "easy way out", but so can
overconfidence in one's intuitions and opinions.

I don't think that I suffer from a paralyzing excess of uncertainty, not at
all. I tend to make my best guess and then act on it. I don't know 100%
that the Novamente design will work (you think it won't) ... but my best
guess is that it will, so I'm spending most of my time on it.

On the other hand, I think it is possible that YOU suffer from an excessive
confidence in your own intuitions and opinions and your own potential
historical role.

I see much less "uncertainty abuse" around me than I see "overconfidence
abuse" -- people being very narrow-minded and overconfident in the things
they've grown up believing, and not willing to doubt their beliefs or
consider other ideas.

-- Ben G


