Re: Fuzzy vs Probability

From: David Clark (
Date: Sat Jan 15 2005 - 08:48:03 MST

----- Original Message -----
From: "Stephen Tattum" <>
To: <>
Sent: Saturday, January 15, 2005 4:20 AM
Subject: Fuzzy vs Probability

> I couldn't help noticing also that generally there are gaps in the
> plan. As a philosopher I saw the omission of any philosophy of mind -
> crucial to any AI discussions and for any 'deep understanding' of the
> issues actually outlined - strange... I have witnessed in the past
> prejudice against philosophy and philosophers here too (apology already
> accepted of course) and I wondered if the project of creating AI is
> being pushed forward before it is ready. Now I believe that the
> singularity is inevitable, and I am not suggesting that the institute is
> wrong, just that creating an Artificial General Intelligence needs more
> emphasis on the general. Any thoughts?

Why does AI or any other intelligence have to mimic humans? If an AI were
being made out of cells and biology like the human brain, then it might be
reasonable to design the AI's structures in the same way, but that is not
the case. Take as one example the number of variables we can hold in our
minds at one time: maybe 6, 10, or even 20. A silicon-based computer can
hold any number of variables (1,000 or 100,000) in *focus* at one time.
Human memory is very inexact; computer memory is exact. Human brains are
hugely parallel and computers are hugely serial. Even if most humans don't
use Bayesian reasoning, would they do so if they were smart enough?
Bayesian reasoning should stand or fall based on what it can give an AI,
not on whether humans use it much. I see many faults in the way humans
think; should I design an AI that has the same obvious faults?
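For readers unfamiliar with what "Bayesian reasoning" buys a machine, here is a minimal sketch of the core update rule. The function name, hypotheses, and numbers are hypothetical, chosen only to illustrate the mechanics of Bayes' theorem, not anything specific to SIAI's work:

```python
def bayes_update(priors, likelihoods):
    """Return posterior probabilities via Bayes' rule.

    priors: dict mapping hypothesis -> P(H)
    likelihoods: dict mapping hypothesis -> P(evidence | H)
    """
    # Unnormalized posterior for each hypothesis: P(H) * P(E | H)
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    # P(E), the normalizing constant, is the sum over all hypotheses
    evidence = sum(unnorm.values())
    return {h: p / evidence for h, p in unnorm.items()}

# Two competing hypotheses with a 50/50 prior; the observed evidence
# is three times as likely under H1 as under H2.
posterior = bayes_update({"H1": 0.5, "H2": 0.5},
                         {"H1": 0.6, "H2": 0.2})
print(posterior)  # {'H1': 0.75, 'H2': 0.25}
```

A computer can run millions of such updates over thousands of variables without the arithmetic errors and base-rate neglect humans are prone to, which is the point: the machine need not inherit our limits.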

If Eliezer's discussions are not considered 'deep understanding', or at the
very least detailed explanations, then you haven't been reading SL4 or his
documents very carefully.

-- David Clark

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT