From: Ben Goertzel (ben@goertzel.org)
Date: Mon May 24 2004 - 09:11:04 MDT
Hi,
> Ben Goertzel wrote:
> >
> > We've had this discussion before, but I can't help pointing out once
> > more: We do NOT know enough about self-modifying AI systems to estimate
> > accurately that there's a "zero chance of accidental success" in
> > building an FAI. Do you have a new proof of this that you'd like to
> > share? Or just the old hand-wavy attempts at arguments? ;-)
>
> Ben? Put yourself in my shoes for a moment and ask yourself the question:
> "How do I prove to a medieval alchemist that there is no way to concoct
> an immortality serum by mixing random chemicals together?"
Pardon my skepticism, but I don't believe that the comparison of
A) your depth of knowledge about FAI, versus mine,
with
B) modern chemical, physical, and biological science, versus the medieval
state of knowledge of these things,
is a good one.
Frankly, this strikes me as a level of egomania even beyond the very
high level that you normally demonstrate ;-)
I can well believe that you have some insight into FAI beyond what I
have, and beyond what you or anyone else has put in their published
writings. But you're just one human, working for a few years, and I
don't believe you've erected a secret edifice of knowledge even vaguely
comparable in magnitude to the sum total of science over the last 500
years or so. Sorry.
Between the medieval world-view and the modern scientific world-view
there are so many differences that you have an extreme case of what
Feyerabend called "incommensurability." The medievals spoke different
languages than we do, in many senses. I don't believe that you have
advanced so far beyond me and the other mortals that your understanding
is that profoundly incommensurable with ours.
However, I do think it's possible that you have a theory of the
"probabilistic attractor structures" that self-modifying cognitive
systems are likely to fall into. Such a theory could potentially lead
to the conclusion that accidental FAI is close to impossible. If you
have such a theory, I'd be very curious to hear it, of course. I have
worked out fragments of such a theory myself but they are not ready to
be shared.
Next, a note on terminology. When you said "it's impossible to create an
FAI by accident," I saw that there were two possible interpretations:
1) it's impossible to create an FAI without a well-worked-out theory of
AI Friendliness, just by making a decently-designed AGI and teaching it
to be nice
2) it's impossible to create an FAI without trying at all, e.g. by
closing one's eyes and randomly typing code into the C compiler
Of course, 2 is almost true, in the same way that a monkey typing out
Shakespeare is extremely unlikely. Since this interpretation of your
statement is very uninteresting, I assumed you meant something like 1.
My statement is that, so far as I know, it's reasonably likely that
building a decently-designed AGI and teaching it to be nice will lead to
FAI. This is not just based on applying the Principle of Indifference;
it's based on a lot of other considerations as well, which we've
discussed many times.
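Just to make concrete how hopeless interpretation 2 is, here is a rough
back-of-envelope calculation in Python (a minimal sketch only; the 60-key
keyboard and 50-character target line are arbitrary assumptions of mine,
nothing more):

    # Back-of-envelope: chance that uniformly random keystrokes reproduce
    # one specific short line of text (all numbers are arbitrary assumptions)
    from math import log10

    keyboard_size = 60   # assumed number of distinct keys the "monkey" can hit
    target_length = 50   # assumed length of the target line, in characters

    # Each keystroke matches independently with probability 1/keyboard_size,
    # so the chance of getting the whole line right in a single attempt is
    # (1/keyboard_size)^target_length:
    log_p = -target_length * log10(keyboard_size)
    print("P(one attempt succeeds) is roughly 10^%.0f" % log_p)   # ~ 10^-89

At roughly 10^-89 per attempt, randomly typing code into the C compiler
isn't going to yield an FAI any time soon; hence my focus on
interpretation 1.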
Of course the phrase "decently-designed AGI" is not well-defined; an
example of what I'm thinking of is something like Novamente in a
mind-simulator configuration, as I've discussed in previous emails and
documents.
Because there is so much uncertainty (even though in my view there's a
much-greater-than-zero chance of success), I wouldn't advocate
proceeding to create a superhuman-level self-modifying AGI without a
better understanding -- unless the threat of imminent destruction of the
human race from some other source seemed more severe than it does now.
Yours,
Ben