Re: The dangers of genuine ignorance (was: Volitional Morality and Action Judgement)

From: Eliezer Yudkowsky
Date: Wed May 26 2004 - 19:18:58 MDT

Ben Goertzel wrote:

> Eliezer,
> My main point in this dialogue was: I don't believe I'm so ignorant or
> such a dipshit or so hide-bound by my preconceptions that, if you
> articulated your current theories on FAI and related subjects, I'd be
> unable to understand them. I've understood a lot of opaque and subtle
> things from a lot of scientific disciplines. I don't believe that your
> insights are an order of magnitude more difficult to grok than
> everything else in science, math, philosophy, etc.

I didn't say you were stupid, Ben, so it is no use objecting that you are
smart. There are many reasons for failing to see dangers that are obvious
in hindsight besides being a hide-bound dipshit. Enthusiasm and comforting
ignorance are two classic causes, and my past self fell victim to both. If
there's anyone left to judge, the future judgment of modern-day AI
researchers might be something like: "They might have destroyed the world
and never dreamed of harming a child." Or not. It sounds like a plausible
story, after all, but so what? There are other Everett branches than
these. Maybe there are some sins deadly enough that future humanity will
not forgive.

I didn't say my insights were hard to grok, Ben, but neither, it seems, are
they so trivial as to be explained without a week of work. I say something
that I see immediately, and you say no. Past experience shows that if you
and I both have the time to spend a week arguing about the subject, there's
a significant chance I can make my point clear, if my point is accessible
in one inferential step from knowledge we already share. The case of AIXI
comes to mind; you made a mistake that seemed straightforward to me because
I'd extensively analyzed the problem from multiple directions. And no, my
insight was not too subtle for you to comprehend. But it took a week, and
the clock went on ticking during that time.

> Next, to respond briefly to a few other peripheral points from your
> message...
> 1)
> I'm quite knowledgeable of probability theory, including Bayes rule and
> its accompanying apparatus, so if I make errors in judgment about FAI or
> related topics, it's not because of ignorance of probabilistic
> mathematics. I used to teach that sorta math in the university, back in
> the olden days. And I've done a lot of work with probabilistic
> inference lately, in the Novamente context.

When I came to Novamente, I didn't succeed in explaining to anyone how
"curiosity" didn't need to be an independent drive because it was directly
emergent from information values in expected utility combined with Bayesian
probability. Maybe you've grown stronger since then. I know I've learned
a hell of a lot myself, in those years. I even call things by their proper
names these days, rather than throwing around raw math with ill-remembered
or reinvented names. But as far as I can tell, you've never understood
anything of Friendly AI theory except that it involves expected utility and
a central utility function, which in the past you said you disagreed with.
I still haven't managed to make you see the point of "external reference
semantics" as described in CFAI, which I consider the Pons Asinorum of
Friendly AI: the first utility system with nontrivial function, intended in
CFAI to describe an elegant way to repair programmer errors in describing
morality. It's not that I haven't managed to make you agree, Ben, it's that
you still haven't seen the *point*, the thing the system as described is
supposed to *do*, and why it's different from existing proposals.
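(As an aside, the "curiosity" point is just the standard expected-value-of-information calculation. The following is a toy sketch of my own, not Novamente's code or anything from CFAI; the hypotheses, actions, and likelihoods are made up purely for illustration.)

```python
# Toy model: "curiosity" falling out of expected utility plus Bayes,
# with no separate curiosity drive. All numbers are illustrative.

# Two hypotheses about the world, with a uniform prior.
prior = {"H1": 0.5, "H2": 0.5}

# Utility of each action under each hypothesis.
utility = {
    ("act_a", "H1"): 10, ("act_a", "H2"): 0,
    ("act_b", "H1"): 0,  ("act_b", "H2"): 10,
}

# Likelihood of each possible observation under each hypothesis.
likelihood = {("obs1", "H1"): 0.9, ("obs1", "H2"): 0.2,
              ("obs2", "H1"): 0.1, ("obs2", "H2"): 0.8}

def best_eu(belief):
    """Expected utility of the best action under a belief state."""
    return max(sum(belief[h] * utility[(a, h)] for h in belief)
               for a in ("act_a", "act_b"))

def posterior(belief, obs):
    """Bayes' rule: update the belief state on an observation."""
    joint = {h: belief[h] * likelihood[(obs, h)] for h in belief}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

# Marginal probability of each observation under the prior.
p_obs = {o: sum(prior[h] * likelihood[(o, h)] for h in prior)
         for o in ("obs1", "obs2")}

eu_now = best_eu(prior)
eu_after_looking = sum(p_obs[o] * best_eu(posterior(prior, o))
                       for o in p_obs)
evoi = eu_after_looking - eu_now

# Positive EVOI means gathering information raises expected utility, so
# the expected-utility maximizer "wants to look" before acting.
print(eu_now, eu_after_looking, evoi)  # 5.0 8.5 3.5
```

With these numbers, acting blind is worth 5 expected utiles, while looking first is worth 8.5, so the agent investigates for exactly the reason it does everything else: the math says to.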

I won't say why. It seems to me that CFAI sucks as a teaching document.
If you want to blame the whole thing on me, fine. But don't make it into
an ungrounded boast on my part that my ideas are too subtle for you to
comprehend; for I've seen myself fail as a speaker.

But yes, as I've said before and I'll say again, Friendly AI *IS* frickin'
subtle, and no one should expect otherwise.

> 3)
> About recognizing, in hindsight, the stupidity of alchemy: yes, of
> course, it's relatively easy to avoid making mistakes of the same type
> that were made in the past (though humans as a whole are not so good at
> even this!). What's much harder is to avoid making *new* types of
> mistakes. The universe is remarkably good at generating new kinds of
> mistakes to make fools out of us ;-)

That doesn't excuse every new generation of scientists making the same
mistakes over, and over, and over again. Imagine my chagrin when I realized
that
consciousness was going to have an explanation in ordinary, mundane,
non-mysterious physics, just like the LAST THOUSAND FRICKIN' MYSTERIES the
human species had encountered.

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT