From: David Clark (firstname.lastname@example.org)
Date: Sun Jan 23 2005 - 10:23:41 MST
I agree with Ben's position on friendliness even though I don't agree with
his agent approach to AGI. The chance that *any* implementation of AI will
*take off* in the near future is absolutely zero. We haven't the foggiest
clue exactly what it will take to make a human level AI, let alone an AI
capable of doing serious harm to the human race. What we need is some
"hands on" working prototypes so that we can see where success lies and
where it doesn't. In the worst case, our current limited hardware will
constrain any AI to much less than human-level intelligence anyway.
Writing software is like writing a book: at some point you just have to
start writing. Worrying about the game ten steps ahead is fine, but in
real-world software it is not very useful. Most of the time, to program
step 2 you need the detailed information from step 1, and so on. Worrying
about what-ifs twenty steps down the road won't move the goal of AGI forward.
We just don't know enough about what it takes to make intelligence using
conventional computers. All the study of human brains and how we think is
fine, but will that help us to program a *real* AI in the *real* world? If
it does, show me the prototype. If you have a great idea for a working AI
then just show me. Talk is cheap. Kudos to Ben for at least having some
working code.
I thought the points made by Harvey Newstrom about taking an engineering
approach were very good; however, engineering works best when you know your
tools and your requirements. How do you use an engineering approach when
you don't know what tools to use, what exactly the problem really is, or
even what approach to take to solve the problem? With an AGI, *everything*
is an unknown. Engineering works with things that are known.
I don't mean to sound so harsh toward Eliezer's philosophical approach.
Designs and philosophy are admirable, but if we are talking about a
software system that works in the real world, then what you see is what
matters. Even if most of the plan were on paper but *something* was in
software, my complaint would be retracted. I also haven't started to
actually write my AI code, so this same criticism applies to me as well.
Even though I don't think Marc's rantings are worth responding to, I was
*very* impressed by the logical critique by Harvey Newstrom. If only the
world could reason so clearly.
-- David Clark
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT