Re: Intelligence roadblocks (was Re: Fighting UFAI)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jul 13 2005 - 14:14:26 MDT


Chris Capel wrote:
>
> Sorry to fork the thread, but this really interests me. This seems
> unlikely, but what if the limited intelligence of humans is due not
> mainly to the firing-speed limitations of neurons, but to some
> architectural limitation in the human brain that would persist even in
> a super-fast-thinking AI? Just because an AI has thousands of
> subjective years to think in every minute of our time doesn't mean
> that the AI would necessarily have a memory able to contain thousands
> of years' worth of memories, or that it would be able to scale up to
> synthesize and organize vastly larger amounts of information than
> humans currently do. It doesn't mean that the AI wouldn't fall prey
> to the same problems of boredom and inertial thinking, and to the
> myriad errors of rationality that get humans believing in really
> confused theories.

From what I know, this is *reaallly* unlikely. It gets more unlikely the
longer I study it; and conversely, the less one knows about how intelligence
actually works, the more anything seems possible - even that the human brain
might not have an incredibly crappy design.

But suppose it were so. What would need to be done? What could or should
humanity be doing to prepare for the possibility? Name the actions that would
need to be taken, and perhaps they will look like good ideas even if the
motive changes. I know that the world would seem a little safer to me, even
from UFAI, if there were a centralized Internet shutdown switch. It's not so
much that it would protect you from a transhuman entity, but it might help
stop an entity from becoming transhuman. Once you're dealing with a
transhuman, I really do think you're just screwed.

> Granted, given what I know about the hodge-podge nature of the
> organization of the brain, it's unlikely that an AI programmer would
> duplicate in an AI most of the same problems that humans have. But if
> it's the case that some of humans' intelligence shortcomings are due
> to a rather fundamental architectural problem, so fundamental that
> it's hard for us to even comprehend it, so fundamental that
> intelligences of different architectures would be unrecognizable as
> intelligences to us, then that could be a huge crimp in developing an
> AI that actually has transhuman intelligence.

It seems to me that I already know enough about human intelligence and the
specific origin of its shortcomings to know that this isn't the case. Or
rather, to know that there are specific things which can be fixed to yield
much greater effective intelligence, whatever remains unfixed. But perhaps I
am wrong.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

