From: Mohsen Ravanbakhsh (ravanbakhsh@gmail.com)
Date: Sat Feb 24 2007 - 02:14:41 MST
My point is:
In the case of a highly modular brain, where we don't have common structures
or any general pattern between modules, there's no definition of
intelligence. You see?
Every general fact we've made up to now becomes some superficial or
functional description, and since introspection won't help, we're left to
scrutinize the biological architectures and so on...
The interesting fact here is that intelligence becomes a wrong
abstraction. It has formed because of our misconception of ourselves and the
false belief in the transparency of mind.
On 2/24/07, Stathis Papaioannou <stathisp@gmail.com> wrote:
>
>
>
> On 2/24/07, Mohsen Ravanbakhsh <ravanbakhsh@gmail.com> wrote:
>
> > Hi everybody,
> > I'm new to this list.
> > I want to begin with a question:
> >
> > There is a possible account of the formation of human intelligence for which
> > the current approach to the study of AI is not appropriate. Suppose our brain is
> > highly modular (every single intelligent capability is provided by its own
> > module), in both its structural and algorithmic aspects, and the unity we feel
> > in our cognition is some kind of illusion (our mental activities are not
> > transparent to us, but we think they are, as the Churchlands propose).
> > It seems that in this case our endeavor is pointless, because our intuition
> > is of no help and the only reliable source is neuroscience, which is not
> > good at giving big pictures.
> >
> > I'm asking: in this case (which is quite probable, in my view), what can
> > we do to construct AI?
> > (Be careful of the 'I' in AI! That is the vague point in this
> > situation.)
> >
>
> If cognition can be reduced to modules or components interacting together,
> wouldn't that make it easier to create AI? The alternative is that there is
> some magical, irreducible soul which we can never hope to emulate. There are
> clearly illusions in cognition, such as that of free will, which we
> experience because we don't know what we're going to do until we do it, and
> perhaps the binding problem (which never really struck me as problematic),
> but that just means that if we emulate the brain, we will get all the
> illusions as part of a package deal.
>
> Stathis Papaioannou
>
>
>
--
Mohsen Ravanbakhsh