From: Chris Capel (pdf23ds@gmail.com)
Date: Wed Jul 13 2005 - 12:49:35 MDT
On 7/13/05, justin corwin <outlawpoet@gmail.com> wrote:
>
> Suppose I,
> Eliezer and other self improvement enthusiasts are quite wrong about
> the scaling speed of self improving cognition. We might see AGI
> designs stuck at infrahuman intelligence for ten years (or a thousand,
> but that's a different discussion). In that ten years, do you think
> that even a project that started out as friendly-compliant(whatever
> that means) would remain so?
Sorry to fork the thread, but this really interests me. This seems
unlikely, but what if the limited intelligence of humans is due not
mainly to the firing-speed limitations of neurons, but to some
architectural limitation in the human brain that would persist even in
a super-fast-thinking AI? Just because an AI has thousands of
subjective years to think in every minute of our time doesn't mean
that the AI would necessarily have a memory able to contain thousands
of years' worth of memories, or that it could scale up its ability to
synthesize and organize vastly larger amounts of information than
humans currently handle. Nor does it mean that the AI wouldn't fall
prey to the same problems of boredom and inertial thinking, and the
myriad reasoning errors that lead humans to believe really confused
theories.
Granted, given what I know about the hodge-podge organization of the
brain, it's unlikely that an AI programmer would duplicate most of the
same problems that humans have. But if some of humanity's intelligence
shortcomings are due to a fundamental architectural problem, one so
fundamental that it's hard for us even to comprehend it, so
fundamental that intelligences of different architectures would be
unrecognizable to us as intelligences, then that could be a huge crimp
in developing an AI that actually has transhuman intelligence. (Of
course, the problems manifested would be of a different nature than
the ones I mentioned in the first paragraph.) Now, these hypothetical
limitations might not even be ones that actual humans would ever run
into, because actual humans don't live that long and have slow
neurons. But an extremely long-lived human, or a human upload, or
perhaps only an AI built with a very human-patterned architecture,
could hit the limitation, providing a cap on intelligence that would
take many times more effort to overcome than the effort that led to
the initial achievement. Or not.
I suppose this is much too speculative to be useful (except in the
context of justin's original post), unless someone has data that
would actually support these ideas. If we do eventually come across
something like this limiting progress, we'll deal with it then, once
we've understood enough to duplicate human intelligence in the first
place, and certainly not before. In the meantime, even a pessimistic
estimate of an uploaded human's intelligence (10-50 times their
intelligence before they were uploaded) might make them quite
dangerous to the world if malicious. So I'll stop wasting everyone's
time now.
Chris Capel
-- "What is it like to be a bat? What is it like to bat a bee? What is it like to be a bee being batted? What is it like to be a batted bee?" -- The Mind's I (Hofstadter, Dennet)