From: Richard Loosemore (rpwl@lightlink.com)
Date: Mon Mar 20 2006 - 08:21:33 MST
I have had similar thoughts.
[I have to qualify this by saying that I might have misunderstood what
this copy, lifted without context from an Extropians conversation, is
actually trying to say.]
It is often said that a superintelligence would think such esoteric
thoughts that we would be physically incapable of understanding the
concepts, never mind sympathizing or relating to them.
I wonder. Maybe, as this post implies, there is a threshold, and humans
got over it, and now it is just a matter of speed of thought and
quantity of context:
- Speed of Thought: getting through the same thoughts, but just doing
it more quickly.
- Quantity of Context: being able to keep an encyclopaedic amount of
information in the head, so that the right connections can be made.
If all intelligences above this critical level are capable of thinking
the same kinds of thoughts, the crucial question then becomes: what
motivates the creature? A superintelligence that could think more
quickly and more encyclopaedically would not necessarily be different
in kind
from me (so I could relate to her quite comfortably), but if she were to
get her kicks by doing sudoku to the exclusion of everything else, I
might not have much basis for a conversation. She would be a Savant.
My gut feeling about this is that there might be an asymptotic norm for
the motivational system of a superintelligence: give it a modest need
for curiosity but nothing else. This stripped-down type of motivational
system would be the most powerful, and the one that most easily
facilitates communication between different levels of intelligence.
Another way to put the same thought: don't give it any weird obsessions
and it will be able to relate to humans.
But now, what is a "weird obsession"?
Richard Loosemore
Eliezer S. Yudkowsky wrote:
> Lee Corbin wrote (to the Extropians mailing list):
>>
>> Damien writes
>>
>>> So most of cog sci is saying "no big jumps to us, no big jumps from
>>> us" except
>>> for the computer theory side, which says "big jump to us, but no more
>>> conceivable jump from us without invoking infinite computation or
>>> precision".
>>
>> Right! This fits my belief. Consider the following sworn testimony of an
>> abductee:
>>
>> "The Alien was working on a doctoral dissertation
>> entitled "A Human Can Understand Anything that I Can, Only
>> it Will Take Him a Lot Longer, and You Will Not Believe the
>> Trouble He Has to Go Through" (which, incidentally, happens
>> to require only three symbols in written Alien language). He
>> selected me for his guinea pig, and it took him 23,392 years
>> to teach me that A proves B. It would have taken him longer
>> ---several centuries, he said---if I had not already been
>> good at math. There were 318 major parts of the theorem, and
>> over four hundred thousand lemmas. Naturally, I don't pretend
>> to have it all "in mind" at the same time, but I vaguely
>> remember that even back on Earth by the end of certain long
>> math proofs I was kind of fuzzy about how the earlier parts
>> went. What was essential for my understanding the proof of B
>> was that I built up a set of notes that's pretty elaborate
>> (to put it mildly). You can check it out: I was allowed to
>> bring all my notes back, and they take up nearly a third of
>> the surface area of Ceres.
>>
>> "My Alien would have failed with a dog or a chimpanzee, no
>> matter how long he tried. That's because I, as a human, have
>> the concept of chunking concepts abstractly. Thus in his
>> dissertation, my Alien proved that we humans are just barely
>> on the right side of a complexity barrier that many stupider
>> Earth creatures haven't crossed.
>>
>> "(By the way, don't feel sorry for me! I had the time of my life.
>> Mainly, no doubt, due to the superb drugs and brain stimulation
>> freely provided.)"
>>
>> I would not find such a narrative implausible on *theoretical* grounds.
>>
>> Lee