RE: Floppy take-off

From: Ben Goertzel (ben@webmind.com)
Date: Mon Jul 30 2001 - 14:54:21 MDT


> Surely, once we have a single (even slow) AI with human general cognitive
> ability (even mostly blind and quadriplegic), super-human ability will
> mainly come down to something like:
>
> * Building/getting enough hardware (possibly specialized) to have a much
> more powerful version, and/or many of these 'seeds'
>
> * Allowing them to learn specific skills & knowledge of computer science,
> and to apply this to improving their own software/hardware design
> incrementally
>
> * Repeat.

Look, most human beings aren't able to program computers effectively, let
alone improve advanced AI programs! There is a big gap between human-level
intelligence and the mastery of AI and computer science needed for
intelligent self-modification.

Yes, this gap could be crossed by having the system read computer science
papers (though note that human-level intelligence doesn't necessarily imply
a perfect ability to read and understand human documents), and by having
human experts teach it computer science. But I think it will actually be
crossed by a combination of such teaching and ongoing refinement of the AI
system's inner workings by human programmers.

> Provided you have *really* achieved the essence of human general
> intelligence, the take-off will be Hard & Fast.

The average gas station attendant has the essence of human general
intelligence.

I agree that getting from here to human-level general intelligence is a much
bigger step than getting from human-level general intelligence to the hard
takeoff. But I still think that the latter step may be tougher than you
guys realize.

I think that to achieve human-level general intelligence, you only need to
learn to modify fairly small schema in a goal-oriented way; the large schema
that run a general intelligence adapt over time in a more involuntary
fashion. Goal-oriented self-modification, by contrast, requires the
intentional learning and modification of large schema, which is a hard
problem, and not something humans ever need to do, since we do not edit and
then recompile our own brains.
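
To make the contrast concrete, here is a toy sketch in Python (purely my
own illustration, nothing like Webmind's actual schema machinery, and the
target values are invented): treat a small schema as a short vector of
weights, and do goal-oriented modification by plain hill climbing against a
fitness function.

import random

# Toy illustration only: a "schema" is a short vector of weights, and
# goal-oriented modification is plain hill climbing against a fitness score.

TARGET = [0.2, -0.5, 0.9, 0.1]  # hypothetical goal behavior for the schema

def fitness(schema):
    # Higher is better: negative squared error against the target.
    return -sum((s - t) ** 2 for s, t in zip(schema, TARGET))

def mutate(schema, step=0.1):
    # Perturb one randomly chosen component: a small, local edit.
    copy = list(schema)
    i = random.randrange(len(copy))
    copy[i] += random.uniform(-step, step)
    return copy

schema = [0.0] * len(TARGET)  # start from a blank schema
for _ in range(5000):
    candidate = mutate(schema)
    if fitness(candidate) > fitness(schema):
        schema = candidate  # keep only the goal-improving edits

print(schema)  # ends up near TARGET: small schema, small search space

With four parameters the search space is trivially small; nothing like this
naive search scales to the large schema that run a whole mind.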

Perhaps you guys are like mountain climbers standing on the foothills
looking up, and figuring when you get 90% of the way to the peak, the last
10% will be trivial. But maybe the last 10% is the place where it gets
really steep and you have to use your pitons and oxygen tanks ;) My sense
that this may be the case comes mostly from our practical and theoretical
work on schema learning... the toughest problem in the pantheon of
subproblems of cognition, a problem that human minds solve effectively only
in special cases, guided by special-case heuristics, and a problem that must
be solved with much more generality and scalability before intelligent
goal-oriented self-modification is possible...

I'll be curious to see whether Eli's views on this change after he's spent
more time calculating and prototyping his own approach to "mental procedure
learning".

-- Ben G
