From: Ben Goertzel (firstname.lastname@example.org)
Date: Tue Apr 02 2002 - 22:48:29 MST
hey there Eliezer...
> Are you sure you really mean "human-equivalent" in the paragraph
> When you say "human-equivalent" in
> your original statement, do you mean what I would call "infrahuman", i.e.,
> intelligence of roughly the same character but substantially inferior in
> terms of actual capabilities?
"I meant what I said
And I said what I meant
An elephant's faithful,
One hundred percent!"
-- Dr. Seuss
> It actually goes up from here to "superintelligence" and "Power", but who
Ummm... George W. Bush ???
Alfred E. Neumann???
You tell me...
> Anyway, I would expect first steps toward seed AI to become possible
> at the prehuman or infrahuman level, depending on the approach and
> the kind of self-improvement being attempted.
I am not sure exactly what you mean by "first steps toward seed AI." I
guess the first one-celled biological organism was a first step toward seed
AI, in a sense.... Or was it the Big Bang??
I think that human-equivalent (and yes, I mean that!) AI will be a
prerequisite for significant self-modification of cognitive algorithms and
data structures to occur.
I think that this kind of self-modification will require mastery of advanced
math and computer science, which will probably be learned by an AI most
easily through communication with humans and through reading of human
research papers. And talking to humans about advanced math and CS, or
reading the Communications of the ACM, seems to me like it'll require
roughly human-equivalent intelligence.
Neither of my dogs is much good at reading the Communications of the ACM,
anyway, although they (idiotic as they are) arguably have greater general
intelligence than Deep Blue....
Sometime in the next few weeks I'll be writing a paper on the current state
of automated global program optimization. (I'm consulting for
Supercompilers LLC, a startup firm that's making a Java supercompiler.)
I'll post a link to it here when I do. Looking at this work closely has
given me a pretty concrete sense of what's going to be required to get
strong self-modification to work. I don't think an infrahuman AI is gonna
be able to do it.
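To give a flavor of what a supercompiler does (for readers who haven't seen
one), here's a toy sketch: supercompilation specializes a general program
with respect to information known at compile time, unfolding loops and
folding constants into a "residual" program. The class and method names
below are purely illustrative, not from Supercompilers LLC's actual tool.

```java
// Toy illustration of program specialization, the core idea behind
// supercompilation. Names here are hypothetical.
public class SpecializeDemo {
    // General program: computes x^n by iteration, with loop overhead.
    static long power(long x, int n) {
        long result = 1;
        for (int i = 0; i < n; i++) {
            result *= x;
        }
        return result;
    }

    // Roughly what a specializer might emit when n is known to be 5:
    // the loop is fully unfolded into straight-line multiplications,
    // with no loop counter or bound check left in the residual code.
    static long power5(long x) {
        long x2 = x * x;   // x^2
        long x4 = x2 * x2; // x^4
        return x4 * x;     // x^5
    }

    public static void main(String[] args) {
        System.out.println(power(3, 5));  // general version
        System.out.println(power5(3));    // specialized version, same answer
    }
}
```

The interesting (and hard) part, of course, is doing this kind of
transformation globally and automatically, rather than by hand on a toy
function like this one.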
Of course, my intuition could well be wrong on this point -- it's just an
intuition.
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT