Minimum complexity of AI?

From: sam kayley (thedeepervoid@btinternet.com)
Date: Sun Feb 20 2005 - 21:17:37 MST


Another question unlikely to be usefully answerable at present: how large
do you expect the smallest AGI implementation to be (e.g., in lines of
obfuscated Lisp)?

To make the question somewhat well defined, assume:

Memory available: around the number of synapses in a human brain * a small
number of bits.
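As an aside, that memory assumption can be put in concrete terms. The synapse count and bits-per-synapse figures below are my own illustrative choices, not claims from the question itself:

```python
# Back-of-envelope memory budget for the assumption above.
# Assumed figures: ~1e14 synapses (common estimates run 1e14-1e15)
# and "a small number of bits" per synapse, here taken as 8.
synapses = 1e14
bits_per_synapse = 8
total_bytes = synapses * bits_per_synapse / 8  # 8 bits per byte
print(f"~{total_bytes / 1e12:.0f} TB")  # ~100 TB
```

So under these assumed figures the memory ceiling is on the order of a hundred terabytes.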

Learning from sensory-motor experience in a robot body, with teaching by
people who have minimal special knowledge of the AI's workings (any special
knowledge required must be included in the Line Count).

Total number of CPU cycles used in learning + proof of being a functioning
intelligence no greater than a reasonable estimate of the equivalent that a
human brain uses in 20 years.
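One crude way to put a number on that cycle budget; the synapse count and firing rate here are my own assumed figures, not anything from the question:

```python
# Rough cycle budget for "what a human brain uses in 20 years".
# Assumed figures: ~1e14 synapses with events at ~100 Hz,
# counting one synaptic event as one "operation".
ops_per_second = 1e14 * 100                    # ~1e16 events/s
seconds_in_20_years = 20 * 365.25 * 24 * 3600  # ~6.3e8 s
total_ops = ops_per_second * seconds_in_20_years
print(f"~{total_ops:.1e} operations")  # ~6.3e24
```

On these assumptions the budget comes out somewhere around 10^24 to 10^25 operations, which gives a feel for how loose the constraint is.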

Goal system not entirely arbitrary (FAI Line Count is a separate question).

Defining roughly human-level or greater general intelligence exactly and
concisely isn't easy, so I will give some examples as cold water for any
cheating philosophers on the list: learning to play Go to a reasonable
standard given an English definition of the rules and/or a partner to play
against, arguing whether it is sentient, designing a bicycle given an
English description of requirements, driving a car in realistically
uncontrolled situations. Challenge task: creating a kit that replaces a
person's appendages with blue tentacles with an awkward tendency to curl
into the shape of paperclips when not in use, and deciding whether this
should be applied to Michael Jackson.

Perhaps the compactness and subtlety of the program that generates the
Mandelbrot set, or of some of the IOCCC entries, is an appropriate
comparison. Length of program + sufficient documentation/comments to be
comprehensible is a different question, as is the length of the shortest
AGI program that will be written before the science of mind is mature.
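To make the Mandelbrot comparison concrete, here is a minimal text-mode renderer, my own sketch rather than anything from the thread, showing how few lines it takes to generate unbounded detail:

```python
def mandelbrot_row(y, width=64, height=24, max_iter=30):
    """Render one text row of the Mandelbrot set as '#' (inside) and ' '."""
    row = ""
    for x in range(width):
        # Map pixel coordinates to the complex plane: re in [-2.5, 1), im in [-1, 1)
        c = complex(x / width * 3.5 - 2.5, y / height * 2 - 1)
        z, n = 0, 0
        while abs(z) < 2 and n < max_iter:
            z = z * z + c
            n += 1
        row += "#" if n == max_iter else " "
    return row

for y in range(24):
    print(mandelbrot_row(y))
```

A dozen lines, yet the boundary it draws is infinitely intricate; the point is that line count says little about the subtlety of what a program computes.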



This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:53 MST