From: Ben Goertzel (ben@goertzel.org)
Date: Fri Oct 01 2004 - 14:18:12 MDT
Michael,
I'll just respond to a couple of scattered points in this long exchange; not
because they're the only interesting ones, but because I've been very busy
lately...
> That's the other siren song, at least for us implementers; the desire to
> try it and see. However impatience is no excuse for unnecessarily
> endangering extant billions and subjunctive quadrillions of lives;
> Goertzel-style (attempted) experimental recklessness remains unforgivable.
It gets redundant to keep clarifying my position on this list, but I feel
obliged to do so at least briefly, in case there are newbies who haven't
heard all this before.
My position is that I have much less faith than Eliezer or you in the power
of philosophical, semi-rigorous thinking to clarify issues regarding
advanced AI morality (or the dangers versus benefits of advanced biotech,
nanotech, etc.). Even mathematical reasoning -- and we're verrrry far from
any kind of rigorous mathematical understanding of anything to do with AI
morality -- is only as good as the axioms you put into it, and we never
quite know for sure whether our axioms agree with the experienced world until
we do practical experimentation in the domain of interest...
My skepticism about the solidity of this kind of philosophical thinking
seems to be borne out by the history of Eliezer's thinking so far -- each
year he argues quite vehemently and convincingly for his current
perspective; then, a year later, he's on to a different perspective... I
don't think he's wrong to change his views as he learns and grows, but I do
think he's wrong to think any kind of near-definite conclusion about
Friendly AI is going to be arrived at without significant empirical
experimentation with serious AGIs... Until then, opinions will shift and
grow and retreat, as in any data-poor area of inquiry.
Look at dynamical systems theory. The theorists came up with a lot of
interesting ideas -- but until computers came along and let us experiment
with a bunch of dynamical systems, all the theorists missed SO MANY THINGS
that seem obvious to us now in the light of experiment ("chaos theory" for
one). Not that the theorists were totally wrong, just that their views were
limited, and then the experimental work opened up their minds to new
realities and new avenues for theory.
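To make that concrete, here is a minimal Python sketch (my own illustration,
with invented names, not anything from the original discussion) of the
logistic map -- the textbook example of a dynamical system whose chaotic
regime only became widely appreciated once it was cheap to iterate on a
computer:

    # Logistic map: x_{n+1} = r * x_n * (1 - x_n).  At r = 4 the map is
    # chaotic: trajectories from nearby starting points diverge rapidly.

    def logistic_trajectory(x0, r=4.0, steps=50):
        """Iterate the logistic map from x0 and return the trajectory."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    # Two starting points one billionth apart:
    a = logistic_trajectory(0.300000000)
    b = logistic_trajectory(0.300000001)
    for n in (0, 10, 30, 50):
        print(n, abs(a[n] - b[n]))

After a few dozen iterations the two trajectories bear no resemblance to each
other -- sensitive dependence on initial conditions, something decades of
pencil-and-paper analysis had largely missed.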
In short, I really don't think we're going to have a decent understanding of
the morality of advanced AIs until we've experimented with some AGI systems
acting in various environments with various goal systems. You can call it
"reckless" to advocate this sort of experimentation -- but then, I'd argue
that it's reckless NOT to proceed in the most rapid and sensible way toward
AI because the alternative is pretty likely the destruction of the human
race via other nascent technologies.
Based on this philosophy, I am proceeding -- as fast as possible given the
limitations of my funding for the project, my own time, and my need to earn a
living -- to complete the Novamente AI system, which I believe will result in
an AGI worthy of the label: one that can be used to experiment with AGI
morality, among many other things.
I'm not looking to repeat prior arguments with Eliezer and you; my point in
writing these paragraphs is just to clarify my position in the light of your
somewhat deprecating allusion to my work in your message.
> > The *explanation* of 'wetness' does NOT NEED to be *derived* from
> > physics.
>
> Yes, it does. If wetness wasn't physically implementable, it would not
> exist.
The limitations of this point of view should be obvious to anyone who's
studied Buddhist psychology or Western phenomenology ;-)
You say that only the physical world "exists" -- but what is this physical
world? It's an abstraction that you learned about through conversations and
textbooks. That is, it's a structure in your mind that you built out of
simpler structures in your mind -- simpler structures corresponding to
sensations coming in through sight, sound, and the other senses.
You can take a point of view that the physical world is primary and your
mind is just a consequence of the physical domain. Or you can take a point
of view that mind is primary and the very concept of the physical is
something you learn and build based on various mental experiences. Neither
approach is more correct than the other; each seems useful for different
purposes. I have found that a simultaneous awareness of both views is helpful
in the context of AI design.
-- Ben Goertzel