From: Peter Voss (peter@optimal.org)
Date: Mon Jul 30 2001 - 09:49:18 MDT
Just like Brian, I'm also surprised at Ben's claim that human programmers
will play a significant, ongoing role in enhancing human-level AI to
super-human ability - and that this may take decades. Any system with these
characteristics is missing some crucial aspect of general intelligence. I
suspect that this will be the case with AI systems that concentrate on
*separately* coding high-level reasoning, instead of deriving it from
environmentally-interactive perception/ action & concept formation.
Surely, once we have a single (even slow) AI with human general cognitive
ability (even mostly blind and quadriplegic), super-human ability will
mainly come down to something like:
* Building/ getting enough hardware (possibly specialized) to have a much
more powerful version, and/ or many of these 'seeds'
* Allowing them to learn specific skills & knowledge of computer science,
and to apply this to improving their own software/hardware design
incrementally
* Repeat.
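For concreteness, here is a minimal toy sketch (in Python) of how that loop
compounds. Every number in it - the per-cycle hardware gain, the rate at which
a smarter system improves its own design, and the "super-human" threshold - is
my own illustrative assumption, not a figure from Ben's or Eli's designs; the
point is only that the cycle feeds on itself.

# Toy model of the build-hardware / learn / self-improve / repeat loop above.
# All parameters are illustrative assumptions, not measured quantities.
# Capability is a dimensionless multiple of human-level (1.0 = the seed AI).

def takeoff_cycles(hardware_gain=2.0,       # assumed per-cycle hardware scaling
                   software_gain_rate=0.5,  # assumed design-improvement rate
                   target=1000.0,           # assumed "strongly super-human" mark
                   max_cycles=100):
    capability = 1.0  # start at roughly human-level general intelligence
    for cycle in range(1, max_cycles + 1):
        # Step 1: throw more / faster hardware at the existing design.
        capability *= hardware_gain
        # Step 2: let the system apply its computer-science skills to its own
        # software/hardware design; a more capable system finds bigger gains.
        capability *= (1.0 + software_gain_rate * capability ** 0.5)
        if capability >= target:
            return cycle, capability
    return max_cycles, capability

if __name__ == "__main__":
    cycles, capability = takeoff_cycles()
    print(f"reached ~{capability:.0f}x human-level after {cycles} cycles")

With these made-up numbers the loop passes 1000x human-level in a handful of
cycles; the exact figures don't matter, only that the growth is compounding
rather than linear.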
This scenario does not even take into account any specialized software/
computer-language skills that may be more deeply embedded (as per Eli's
design), nor the many other advantages of artificial, designed systems
(http://www.optimal.org/peter/hyperintelligence.htm )
Provided you have *really* achieved the essence of human general
intelligence, the take-off will be Hard & Fast.
www.optimal.org - Any and all feedback welcome: peter@optimal.org