Re: answers I'd like, part 2

From: Adam Safron (asafron@gmail.com)
Date: Wed Nov 14 2007 - 21:18:53 MST


On Nov 14, 2007, at 10:02 PM, Stathis Papaioannou wrote:

> On 15/11/2007, Wei Dai <weidai@weidai.com> wrote:
>
>> Yes, I think that is very likely. Mental arithmetic and visual
>> perception
>> are both processes that solve well defined problems and are probably
>> isolated to one or two brain modules. In contrast, our ability to
>> generate
>> intuitions and especially our ability to reason about these questions
>> probably involve many more brain modules, and we don't even know
>> how to
>> clearly define the problems being solved.
>>
>> Also, for visual perception, you can do animal experiments. How do
>> you even
>> begin to reverse engineer human-specific neurological processes
>> before we
>> have nanotechnology and/or uploading?
>>
>> Or, to mention another problem, for mental arithmetic and visual
>> perception,
>> you can have someone do mental arithmetic or look at something
>> while you
>> watch his neurons fire. But telling someone "now come up with a new
>> insight
>> into the nature of induction" isn't likely to get you anywhere.
>
> Even if the neural activity involved in "higher" cognition is more
> complex than that involved in more basic cognition (and I think that
> is an assumption, not necessarily true), there is no reason to suppose
> that it is a different type of physical process altogether. It's all
> generic neurons, generic neurotransmitters, generic action potentials.
> Thus, if we are able to analyse and emulate a simple brain function,
> emulating the rest of the brain should just be more of the same.

This seems like a fallacy of composition. Simple brain function? All
of these phenomena depend upon functional relationships between
neurons. But it does not follow that we will be able to understand
more complex configurations (by "complex", I mean difficulty of
understanding, not necessarily structural/functional complexity)
just because we understand simpler configurations. Neuroscientists
have detailed mechanistic explanations of basic perceptual processes.
They have had nowhere near this kind of success when it comes to
things like "executive functions". This may be because information
processing in the frontal lobes is more idiosyncratic (self-
organizing in a complex way). Bottom-up perceptual processes are
topographic and map the external world in a fairly tractable manner;
consequently, we have fairly detailed models going down to the
neuronal level. We have nothing comparable for higher-order
cognition.

We could emulate the human brain by modeling the activity of different
neural regions, but this would be an extremely limited form of reverse
engineering. Emulation isn't understanding. Ideally, we would like a
detailed understanding of the engineering principles underlying
cognition. Without it, we will be limited in our ability to
anticipate the emergent properties of the emulated brains. If you
achieve a super-intelligence by this sort of method (the ethics of
which are questionable), I don't see how we will be able to ensure
its benevolence (which matters if you're not a super-intelligence).

-adam

> --
> Stathis Papaioannou



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT