RE: upper theoretical limits

From: Ben Goertzel (ben@intelligenesis.net)
Date: Tue Oct 03 2000 - 19:35:33 MDT


It seems very obvious to me that, for any given hardware setup, there is a limit to the intelligence of any system that can run on this hardware.

Of course, you may say that an AI system, using the whole universe as its auxiliary memory, can extend its intelligence by building itself extra brains, if it's smart enough.... But my conjecture is that for any particular piece of hardware H, if a mind is using H as the physical substrate for its thinking, then there is an upper limit to the intelligence of this mind.

Of course, to prove this mathematically requires one to have a mathematical definition of intelligence. For instance, if one defines intelligence as "the ability to achieve complex goals in complex environments," then the claim follows under any algorithmic-information-based definition of complexity.
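
To spell the step out (this is my own sketch of the argument, not a proof Ben gives; the symbols S, b, g, K and the constants are introduced here purely for illustration): a machine with finitely many bits of state can only produce behaviors of bounded algorithmic complexity, so the complexity of the goals it can achieve is bounded too.

    Suppose the hardware H has $S$ bits of state. Any behavior $b$ that H can
    generate is determined by H's fixed description plus one of its at most
    $2^S$ configurations, so $K(b) \le S + c_H$ for a constant $c_H$ depending
    only on H. If achieving a goal $g$ requires producing some behavior $b$
    with $K(b) \ge K(g) - c$, then every goal achievable by a mind running on
    H satisfies $K(g) \le S + c_H + c$. Under the "complex goals in complex
    environments" definition, the intelligence of any mind on H is therefore
    bounded by a function of S alone.

The assumption that complex goals require complex behavior is the load-bearing one here; it is what ties "ability to achieve complex goals" to K in the first place.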

If one wants to allow for AI systems incorporating more & more of the universe into their brains, then ultimately we arrive at the question of whether the universe is finite. If so, there is a (presumably very large) upper bound to the intelligence of any entity in the universe.

In practice, I think that the upper limit for intelligence achievable on, say, the PC on my desk is pretty small.

Yeah, this PC is a universal Turing machine if it makes use of N floppy disks for its "memory tape," but intelligence isn't just about theoretical computing power, it's about computing power ~in time~ -- and looking things up on these N floppy disks will slow the system down enough that eventually, beyond some k, the k'th floppy disk will not enhance its intelligence any more.
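
A toy model of this point about computing power in time (entirely my own illustration; the latency figures and the uniform access pattern are assumptions, not anything stated above): once the working set spills out of RAM onto floppies, the average time per memory access is dominated by the slow medium, so effective operations per second flatten and then collapse even as capacity grows.

    # Toy model: effective memory accesses per second as more floppy disks
    # are added as auxiliary "tape".  All numbers are illustrative assumptions.
    RAM_BYTES = 64 * 2**20        # assume 64 MB of RAM
    FLOPPY_BYTES = 1.44 * 10**6   # 1.44 MB per floppy
    RAM_LATENCY = 100e-9          # ~100 ns per RAM access (assumed)
    FLOPPY_LATENCY = 100e-3       # ~100 ms per floppy access, incl. seek/swap (assumed)

    def effective_access_rate(n_floppies):
        """Accesses/sec if accesses spread uniformly over all available memory."""
        total = RAM_BYTES + n_floppies * FLOPPY_BYTES
        # Fraction of accesses that miss RAM and have to go to a floppy.
        slow_fraction = max(0.0, (total - RAM_BYTES) / total)
        avg_latency = (1 - slow_fraction) * RAM_LATENCY + slow_fraction * FLOPPY_LATENCY
        return 1.0 / avg_latency

    for n in (0, 10, 100, 1000, 10000):
        print(f"{n:6d} floppies: {effective_access_rate(n):14.0f} accesses/sec")

On this toy model the raw capacity keeps growing linearly while usable accesses per second fall by several orders of magnitude, which is the sense in which the k'th floppy stops buying any additional intelligence.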

My guess is that about a terabyte of RAM is required to get human-level intelligence (serviced by an appropriate number of processors; not, at current processor speeds, just one or even just a handful). This is based partly on a priori calculations and partly on experimentation with our current 100 Gig RAM network.
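
As a rough illustration of why a terabyte of RAM would need more than a handful of processors to service it (my own back-of-envelope with assumed round figures, not the a priori calculation referred to above): if each processor can move on the order of 1 GB/s of memory traffic, then touching even a few percent of a terabyte each second already calls for tens of processors.

    # Back-of-envelope: processors needed to "service" a terabyte of RAM.
    # Both figures below are assumptions for illustration only.
    RAM_TOTAL = 1e12              # 1 TB of RAM
    PER_CPU_BANDWIDTH = 1e9       # assume ~1 GB/s of memory traffic per processor

    for touched_fraction in (0.01, 0.1, 1.0):
        bytes_per_sec = RAM_TOTAL * touched_fraction   # bytes touched each second
        cpus = bytes_per_sec / PER_CPU_BANDWIDTH
        print(f"touch {touched_fraction:4.0%} of RAM per second -> ~{cpus:6.0f} processors")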

-- Ben G

> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf Of Alicia Madsen
> Sent: Tuesday, October 03, 2000 6:35 PM
> To: sl4@sysopmind.com
> Subject: upper theoretical limits
>
>
> This upper theoretical limit people speak of, does it all go back to how well humans grasp the concept of infinity? As a college student, I have just begun to grapple with calculus, and am not familiar with its peculiarities. In replying to my post, explain as much as possible about holes in my logic.
>
> Peter Voss said recently on sl4@sysopmind.com: "I agree with you that here we are in 'intuition' territory. My own approach to AI design leads me to believe that at a certain point of intelligence there will be enough of an exponential burst for one system to dominate. I don't think that hardware will be a major limiting factor. On the other hand, perhaps each type of intelligence has its own upper theoretical limit. If so, I haven't yet identified it."
>
> Perhaps if one "type" of intelligence reaches its limit at a certain number n, and another "type" reaches its intelligence limit at a certain number k, then all that must be done is rewrite their functions so that they are continuous together at a new limit. My question is: if we have these webminds, and they are capable of rewriting their programs so that they can continually increase their capabilities, and then work together, why worry about upper limits? I do not think that "upper" limits will exist, as you speak of an exponential growth rate, and thus continuous everywhere, with this capability to be rewritten favorably.
>
> This is why I think that one AI, or even a "webmind" as it is called, will depend on the others, and for its own survival will not "dominate" them. In my opinion, an AI is like all other AIs and thus only one of them in the first place, especially because they will be sharing information. It is true that there are many parallels between the AI system and humanity, because we are friends with the logic of Darwin, as we are trapped in the existential circumstance thrust upon us.
>
> But Darwin is also not limiting, only a tool we choose to use, and we are not to fear this tool. In my culture (Inupiaq Eskimo) there are examples from past and present of elders leaving the small community when food is scarce and wandering off to die so that the community may survive. I think that because humanity has the choice, and has demonstrated the ability to make this choice of suicide, an AI system will also have this choice, as we are in the same condition. A human interface with the baby AI or webring will not jeopardize it because we cannot lie to it.
>
> Thus my opinion is that AIs depend on each other for survival, and are also not limited in intelligence, as well as not limited by their existential circumstance.
>
> I follow Eliezer Yudkowsky's logic that we cannot lie to an AI, at least not for long, because its logical flaw will be spottable. So it will not be an issue. What I find interesting is this concept of AIs having familial relationships, although I do not think it is of much importance in the long run towards an SI. If humans are able to interface with the AI and "webrings" then we will shape the graph of their intelligence in the beginning, and so I do not worry about AIs having moral dilemmas, because of the guidance they will receive from their human interface, or even falling out of the community of AIs and "dying". With the development of nanotechnology well underway, and also the presence of many interested individuals and organizations in AI, I have no fear that an SI will not eventually exist, as the rat race has already begun.
>
> Alicia Madsen
>
>


