From: Thomas McCabe (pphysics141@gmail.com)
Date: Wed Nov 28 2007 - 13:52:16 MST
On Nov 28, 2007 12:47 AM, John K Clark <johnkclark@fastmail.fm> wrote:
> P K" <kpete1@hotmail.com>
>
> > Why would "ignore what the humans say"
> > be the right solution?
>
> Because Mr. AI knows he is smarter than the humans.
This does not follow; it's a non sequitur. Suppose that I have two chimps in
a zoo, and both request food at the same time. Obviously, I cannot
feed both simultaneously. Now because we're smarter than the chimps,
the solution is to ignore them both and let them starve; obviously,
that's what any responsible zookeeper would do, right?
> > What about telling the humans that there
> > is a contradiction in commands and asking
> > for clarification.
>
> Because trying to explain to a human exactly why the commands are
> contradictory would be like trying to explain to a dog how quantum
> mechanics works. The logical solution is to have the most intelligent
> person call the shots, and that would be Mr. AI.
Er, yes, that's why we're building the AI in the first place.
> > there is no reason for the AI to become annoyed
> > with humans, unless underlying circuitry for the
> > emotion of getting annoyed was programmed into the AI
>
> Yea right, and there is no way my radio can play Beethoven unless
> somebody put a Beethoven circuit into it, and my digital camera can't
> take a beautiful picture unless somebody wrote a beauty subroutine for
> it.
Getting a radio to replicate a sound pattern, or a camera to replicate
a light pattern, is much, much, much easier than getting an AGI to
replicate our mental processes. Our mental processes are, like, really
complicated.
> > "Thomas McCabe" pphysics141@gmail.com
>
> > To get a vague sense of how different "another mind"
> > can be, try talking to someone who has *never*
> > experienced Western culture. Then realize that they're
> > still 99.9% identical to you genetically.
>
> You are making my case for me, you go on and on about how strange and
> alien the AI will be, and then say you understand it well enough to be
> certain it will be your slave for the next billion years.
Yes. We humans can understand something on more than one level. If I
wanted to, I could analyze the ELIZA chatbot until I understood every
line of code. Yet nobody would anthropomorphize a chatbot.
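To make the point concrete: the entire "mind" of an ELIZA-style chatbot is a short list of pattern-and-response rules. Below is a minimal illustrative sketch in that style (not Weizenbaum's actual script; the rules and responses here are invented for the example). You can read every line and see there is nothing inside to anthropomorphize.

```python
import re

# An ELIZA-style chatbot reduced to its essence: a fixed table of
# (regex pattern, response template) rules, tried in order.
# These particular rules are made up for illustration.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (\w+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(text: str) -> str:
    """Return the first rule's canned response, or a default filler."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

That table lookup is the whole program; there is no state, no goals, no understanding to misread human feelings into.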
> > You see, we have these things called "reason"
> > and "logic", which we can also use to understand minds.
>
> And it is not illogical to ask yourself what would I do if I were in
> that other mind's place, and then repeat the exercise but this time
> trying to think more like he thinks not as you think.
Yes, it is, because you will fail miserably. This is not your fault; your
brain is not built to understand AGI. It would be like trying to run
five miles in ten seconds; it is impossible by any reasonable
definition of the word.
> Obviously it will
> not always work but it's worth a try. I would even go so far to say that
> the value of anthropomorphic reasoning was one of the main reasons
> evolution drove us to develop a bigger brain, because the single most
> important aspect in our environment is our fellow intelligent beings and
> survival demands we understand them as best we can.
Anthropomorphic reasoning works well in the ancestral environment. We
do not live in the ancestral environment. We are moving *further away
from* the ancestral environment each and every day.
> > We've long since established that you're a believer
> > in anthropomorphism,
>
> Then why do you keep telling me that as if it's a major news flash?
>
> > If you actually did Google it, you would have
> > found that there was a full-scale, physical
> > conference on AGI scheduled for March 2008
>
> I did Google it and I don't know how many pages you had to go down to
> find the above (it's not on the first 4 pages!) but no matter, I'm just
> not very interested in Analytical Graphics Inc, or The American
> Geological Institute, or Adjusted gross income, or Adventure Game
> Interpreter, Aquent Graphics Institute, or the Association for
> Geographical Information or….
Why do you think there's an Acronym Dictionary
(http://www.acronymfinder.com)? AGI has more than two hundred
different definitions; twenty-six are listed, out of which "Artificial
General Intelligence" is ranked as #4 in relevance.
> John K Clark
- Tom
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT