From: Michael Roy Ames (email@example.com)
Date: Wed Jan 01 2003 - 11:08:52 MST
> A question is: Can you conceive a set of tests with sufficient breadth
> that, if an AGI system could pass them, it would be pretty damn clear
> it possessed a high degree of general intelligence?
The short answer is: yes.
A slightly longer answer is:
I am reasonably confident I could formulate tests of AGI intelligence up
to a significant percentage of my own intelligence (say 80%). These
tests could be Games set in any of the microdomains that I have
described in the Curriculum, as long as there were no limits on the
number of bits allowed within the microdomain.
I currently do not see any good reason to create such tests... but
perhaps I'm being myopic. I want an AGI that can obtain a good
understanding of what human beings are. I want it to be able to relate
to us, and 'take on our cause' - that is Friendliness. In order to do
this, I think it is essential for the AGI to have interfaces with the
real world in real time... maybe not at first, but as early as
possible. Let me give you an example of why...
Currently I know of you, Ben Goertzel, from email lists. I have also
read a number of your books and papers; we work on Novamente together
and share personal emails. But I have never met you 'in the flesh', and
I hesitate to say that I *know* you until I have spent time palling
around, eating, jamming, arguing. Without the richer interface of full
human senses in close proximity to the target data (you) my knowledge is
sharply limited. I can only imagine that this will be *exactly the same
for an AGI*. I am not being anthropomorphic, rather I am pointing to
the difference in breadth and volume of data between communicating
across the net and communicating face to face.
It is certainly possible for me to reformat multiple types of sensory
data and deliver it to any of the microdomains (allowing for unlimited
bits). However, this would be just raw data with *no pre-processing*.
And pre-processing *counts* as part of intelligence. Human beings have
tons of pre-processing. Are we expecting the AGI to creatively develop
its own pre-processing from scratch? I don't think that is going to
happen. We will have to give it some help. I see that help coming in
two ways: 1) canned pre-processing routines that 'come with' a new
sensory input and 2) learned routines that we teach ver via a simpler
interface. I could expand the games in any of the microdomains to
present new routines... but the AGI would have to be able to take those
'presentations' and try them out for verself. I do not plan to create
new pre-processing routines myself, just provide incremental lessons to
teach an AI how to follow any routine.
So, to return to the specific point about 'understanding human
beings'... An AGI without the full suite of senses currently apportioned
to humans is going to be missing important things. Some of those
things I cannot even articulate, but I know 'em when I see 'em. A
Friendly AGI is going to have to model humans quite closely... the
closer the better. Currently we don't understand human cognition well
enough to explain it all to an AGI in advance... but we can at least
give it a good approximation of human experiential input and have it
draw its own conclusions. Without this input, I don't believe an AGI is
going to have a very good model of us. Should we not give ver as much
help as we can, as early as we can, to get it right?
Michael Roy Ames
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT