Re: State of the SI's AI and FAI research

From: Eliezer Yudkowsky
Date: Wed Feb 16 2005 - 09:49:48 MST

Thomas Buckner wrote:
> BTW, Eliezer said:
> "My thinking has changed dramatically. I just
> don't know how to measure
> that. Progress is not the same as visible
> progress." Some have noted with concern that
> 'Eliezer was absolutely sure of X, then a year
> later was absolutely sure of Y not X, then a year
> later Z not X or Y'.

I was never absolutely sure, nor even close to absolutely sure. I was
raised a traditional rationalist - not a Bayesian, but still a traditional
rationalist, wise in the art passed down from Feynman and Popper and Martin
Gardner, though not the art of Jaynes and Tversky and Kahneman. I was too
traditionally humble to be absolutely certain of something (or, if I were
certain, to admit my certainty to myself), for I knew it would not be
rational to be certain. I knew better than to assign a probability of 1.0,
even without knowing *all* the probability theory and cognitive science
behind the prohibition; I knew enough. I was a *good* traditional
rationalist, as traditional rationalists go.
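The prohibition against assigning a probability of 1.0 has a concrete Bayesian justification: under Bayes' rule, a prior of exactly 0 or 1 can never be moved by any evidence, however strong. A minimal sketch (the likelihood numbers here are illustrative, not from the original post):

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H|E) via Bayes' rule, given P(H) and P(E|H), P(E|~H)."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# A merely confident prior still responds to strong contrary evidence:
print(bayes_update(0.99, 0.01, 0.99))   # falls to 0.5

# A prior of exactly 1.0 is frozen: no evidence can ever move it.
print(bayes_update(1.0, 0.01, 0.99))    # stays 1.0
```

This is why certainty is irreversible in a way that mere confidence is not: at probability 1.0 the update rule has no traction left.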

What I did *not* realize back then, and what the parables of traditional
rationality did not teach me, is that being modest and humble and
confessing your uncertainty isn't worth beans if you go ahead anyway and
then turn out to be wrong.

So I went in search of a bigger hammer, a hammer with better discrimination
and calibration; a sword that would not break in my hand; a method that
would work, not just warn me that it might fail and then fail.

It's frightening to me to contemplate that most AGI wannabes haven't even
finished mastering traditional rationality; they reserve a part of their
worldview for a mysterious magisterium and are cheerfully absolutely sure
of the blatantly uncertain future. And in a way this is not surprising,
since the vast majority of modern-day human beings still have not mastered
the traditions of traditional rationality. But it is depressing in
someone who would hold the world in their hands, or who would aspire to
be a maker of minds. I was a damned fine traditional rationalist
when I started out at age 16, though, as it turned out, too weak to
actually obtain correct answers. To actually obtain correct answers, not
just be defensible in your failure, is an extraordinarily difficult thing.

> So be it. In this age, anyone who doesn't
> radically revise their thinking every year or two
> isn't keeping up.
> Tom Buckner

Anyone who doesn't realize they might need to radically revise their
thinking, and then needs to do so, isn't calibrated.

Anyone who humbly disclaims that they might need to revise their thinking,
behaves in more or less the same way regardless, and then needs to
radically revise, makes a subtler mistake.
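"Calibrated" here has a measurable meaning: among claims asserted at 90% confidence, roughly 90% should turn out true. A toy check of that property (the prediction data is invented for illustration):

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: iterable of (stated_confidence, actually_true) pairs.
    Returns the observed frequency of truth per stated confidence level."""
    buckets = defaultdict(list)
    for confidence, outcome in predictions:
        buckets[confidence].append(outcome)
    return {c: sum(v) / len(v) for c, v in buckets.items()}

# Someone who says "90% sure" but is right only half the time is
# miscalibrated, however humbly the uncertainty was confessed:
report = calibration_report(
    [(0.9, True), (0.9, False), (0.9, True), (0.9, False)]
)
print(report)  # {0.9: 0.5}
```

The point of the passage survives the formalism: disclaiming that you might be wrong does not improve this score; only anticipating *which* beliefs will need revision does.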

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT