From: Chris Capel (firstname.lastname@example.org)
Date: Fri Dec 16 2005 - 14:49:02 MST
On 12/15/05, micah glasser <email@example.com> wrote:
> If materialistic reductionism is the correct model of reality (which is
> very problematic) then the concept of emergence must mean that the system in
> question is merely not fully understood. This conclusion, however, is
I think the main, and common, complaint against emergence around here
is that it's a word that doesn't explain anything. It can be accurate,
but is usually largely vacuous.
So what's questionable about the conclusion? What conclusion?
> From this functionalism perspective one
> would say that humans are based on an encoded set of instructions (the form
> of those instructions is irrelevant) and I would say that if AGI is to be
> built it will be based on an engineered set of instructions. So based on
> this reasoning I am merely arguing that consciousness COULD have a
> functional utility that would imply that any sufficiently intelligent system
> might be consciousness. This possibility may turn out to be false but there
> is no fallacy contained in the hypothesis.
My main complaint with your argument is that you seem to be reading
more into "consciousness", as in "self-awareness", than is warranted
by your hypotheticals. While the AI we can call Bob is probably
necessarily aware of Bob's existence if Bob has attained a certain
level of ability, Bob's internal modeling of Bob will probably bear
little resemblance to a human's consciousness or self awareness. You
have a lot more work to do if you want to draw any comparisons. Your
respondents are probably preemptively warning you away from this
tendency to anthropomorphize, toward which you've already strayed a little.
-- "What is it like to be a bat? What is it like to bat a bee? What is it like to be a bee being batted? What is it like to be a batted bee?" -- The Mind's I (Hofstadter, Dennett)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:54 MDT