From: Lee Corbin (lcorbin@rawbw.com)
Date: Fri Jul 04 2008 - 14:59:54 MDT
Charles writes
> [Lee wrote]
>
>> The thing hardly qualifies as an AI if it doesn't have the ontology
>> of a three-year-old. And if an AI can understand what trees, cars,
>> tablespoons, and about twenty thousand other items are---that is,
>> can reliably classify them from either sight (pictures) or feel---then
>> it's going to know what a human being is, though (just as with
>> any of us) it will be undecided about some borderline cases.
>
> One doesn't start out knowing about the world. One learns about it. A
> three-year-old may have ontologies and know about object persistence, but a
> newborn doesn't, and presumably you aren't doing basic restructuring after it
> becomes intelligent (i.e., when you're through writing it), only beforehand.
Babies are really complicated, so we have to be careful not to
extrapolate too much from them. But a baby's brain keeps growing
very quickly, and for a long time. It could be that a baby acquires
its ontological beliefs *as* it's getting smarter.
> So that nascent AI has the capability to learn about cars, trees, people,
> etc., but that knowledge isn't built in. Therefore the goals that are
> built in need to be created without reference to such things...except to the
> extent that you rely on "imprinting" (which can produce results that are a
> bit iffy).
Here, I don't know what the received wisdom on this list is
about this issue, and I'll gladly defer to it. But I *thought*
there wasn't much to the idea of high intelligence in the
complete absence of knowledge (and I do mean *complete*).
Okay, I may have to concede (but I'm not doing so yet) that
a nascent AI may know nothing yet have a fantastic ability to
learn, and so by that measure already be intelligent.
But the way *I* use the term, it's not intelligent yet, and it certainly
is not the subject of our "AI domination" scenarios, as Bryan likes
to call them. The only AIs we have to worry about are those who
have learned enough about the world to be dangerous.
> This makes creating a set of goals that will result in an AI
> that's both friendly and useful a bit tricky...and I say this without even
> having tried it once.
What? You haven't created any AIs yet? Funny, I thought
everyone here had at least a couple of successfully completed
superhuman intelligences to his credit.
> (Partially because the exact form that such goals would take
> is highly dependent upon the exact internal structure of the
> mind of the nascent AI.)
My most likely guess---again, I defer to the old hands here and
invite even their one-line opinions---is that a program, or
rather an evolving set of AI programs, begins by being extremely
"stupid" and only gradually evolves high intelligence over time.
This is done---and, to me, most likely *has* to be done---
concurrently with knowledge acquisition.
Lee