Re: The AIbox.

From: Metaqualia (metaqualia@mynichi.com)
Date: Tue Jul 06 2004 - 07:40:10 MDT


> statement about usefulness and understanding. I don't think of
> understanding as the ability to know all parts of a given field,
> merely those abstractions which fit together to make the whole which
> is useful. We don't need to constantly calculate PI to be able to use

There are two different questions here. The boring one is: will a superhuman
AI ever create concepts that require so many interlocking, essential pieces
of knowledge that the brain won't have enough short-term memory (etc.) to
keep up, and will never get even a pale idea of what the 'thing' is? I think
the answer is probably yes.

The interesting one is: what kind of stuff is impossible for us to model
because of design constraints? What lies outside the scope of human general
intelligence? Simple but unintelligible concepts that we cannot represent,
the way color is impossible for a black-and-white monitor to represent. For
instance qualia, existential stuff. Is there a thing X which is very simple
but which the AI will never be able to explain to us in terms we understand?

And a more interesting additional question: what kind of stuff is impossible
for any general intelligence (one exploiting physical law to the fullest) to
model? Where does intelligence stop being useful? Is there some sort of last
barrier beyond which no physical process can go on representing stuff? (e.g.
why qualia, why the universe, why time...)

Is there a word for a latest-stage AI which has already learned to exploit
all of physical law and has optimized all the matter in the universe for
thinking? In other words, the maximum intellect allowed by physics. If such
a maximum intellect is still incapable of answering the big questions, that
is one more reason to say screw it all and convert everything back to
orgasmium :)

mq
