**From:** Ben Goertzel (*ben@goertzel.org*)

**Date:** Mon Oct 06 2003 - 07:28:30 MDT

**Next message:** Ben Goertzel: "RE: Friendliness and blank-slate goal bootstrap" **Previous message:** Metaqualia: "Re: Friendliness and blank-slate goal bootstrap" **In reply to:** Metaqualia: "Feasibility: 100% Bayesian systems" **Next in thread:** Metaqualia: "Recruiting?" **Reply:** Metaqualia: "Recruiting?" **Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

Curzio,

Yes, the AI design you suggest would be prohibitively computationally

complex -- in a word, intractable, now or in a hundred years.

Think about the inclusion-exclusion theorem from set theory, which is a

necessary part of elementary probability. Computing probabilities

involving combinations of n items requires estimating the probabilities

of 2^n combinations. To avoid this, one makes heuristic

approximations....
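To make the blow-up concrete, here is a small sketch (illustrative toy code, not anything from an actual AI system) of the inclusion-exclusion formula: computing P(A1 ∪ ... ∪ An) exactly requires a signed sum over all 2^n - 1 nonempty subsets of the n events.

```python
from itertools import combinations

def union_probability(events, prob_of_intersection):
    """Exact inclusion-exclusion: P(A1 u ... u An) is a signed sum
    over all 2**n - 1 nonempty subsets of the n events."""
    n = len(events)
    total = 0.0
    terms = 0
    for k in range(1, n + 1):
        sign = (-1) ** (k + 1)
        for subset in combinations(events, k):
            total += sign * prob_of_intersection(subset)
            terms += 1
    return total, terms  # terms == 2**n - 1

# Toy case: 10 independent events, each with probability 0.5,
# so any intersection of k of them has probability 0.5**k.
p, terms = union_probability(range(10), lambda s: 0.5 ** len(s))
# terms is 1023; with n = 30 it would already exceed a billion.
```

Even this friendly toy case needs 2^10 - 1 = 1023 terms; the count doubles with every added event, which is why exact expansion is abandoned in favor of heuristics.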

For another perspective, consider Marcus Hutter's recent work on the

AIXItl artificial intelligence system, which uses ideas from statistical

decision theory (inclusive of Bayes' Theorem). Provably intelligent, but

computationally totally intractable.

As someone else said in their reply to you: more than coding tricks and

compression are needed; clever approximative heuristics are required, and the

system for managing, balancing, tuning and adapting those heuristics is

called a mind.

My own Novamente AI design relies heavily on probability theory, but

deployed in a certain way, within a particular overarching framework ... and

we're well aware of the limitations of this approach, which essentially

arise from the inability to fully expand the above-mentioned

inclusion-exclusion formula in most contexts, and the ensuing need to make

(implicit or explicit) heuristic independence assumptions.
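The simplest such independence assumption can be shown in a few lines (toy numbers, purely illustrative; this is not Novamente's actual machinery): replace an unavailable joint probability with the product of the marginals, and the error you incur is exactly the dependence the heuristic throws away.

```python
# Exact joint vs. heuristic independence assumption (toy numbers).
p_a = 0.6          # marginal P(A)
p_b = 0.5          # marginal P(B)
p_a_and_b = 0.4    # true joint P(A and B); A and B are correlated

# Heuristic independence assumption: P(A and B) ~= P(A) * P(B)
approx = p_a * p_b

# The approximation error is precisely the correlation between
# A and B that the independence assumption discards.
error = p_a_and_b - approx
```

When the full inclusion-exclusion expansion is infeasible, a system must fall back on approximations like this, and the quality of its reasoning hinges on how well it manages the resulting errors.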

-- Ben Goertzel

-----Original Message-----

From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org]On Behalf Of Metaqualia

Sent: Monday, October 06, 2003 12:36 AM

To: sl4@sl4.org

Subject: Feasibility: 100% Bayesian systems

Would the set of all Bayesian probability data necessary to replicate a

100-IQ human be prohibitively large and therefore impossible to store on a

modern supercomputer? (exclude the visual cortex, just think common sense

reorganized in a Bayesian fashion)

Would the lookup time for a moderately complex thought be too long in such a

system? (imagine that all probabilities are stored on a hard disk)

Programming tricks, compression, anything goes to reduce the size -- but

after nothing more can be done and it's just a matter of raw storage space

and speed, are 100% Bayesian human-equivalent AIs theoretically possible or

impossible to implement with present off-the-shelf technology?

Is there any use for alternatives to Bayes as cognition paradigms?

curzio


*This archive was generated by hypermail 2.1.5: Wed Jul 17 2013 - 04:00:42 MDT*