# Re: State of the SI's AI and FAI research

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Feb 15 2005 - 11:12:45 MST

Slawomir Paliwoda wrote:
> I'm curious about the amount of theoretical progress SI has made since CFAI
> and LOGI in the areas of FAI and AI research.

How do you measure theoretical progress? In LOGI there are only a few
passing mentions of Bayes or probability theory, which today seems utterly
alien to me. In "A Technical Explanation of Technical Explanation", the
most recent work I published, there is no explicit discussion of LOGI or
CFAI; but I think that if one read TechExp and understood it, and then read
LOGI or CFAI, the one would see that LOGI and CFAI cannot possibly be
enough because they are not technical models - and that is progress. There
are those who will say, "But, LOGI is not technical!", though they cannot
give a technical definition of what they mean by the criticism. TechExp
gives a mathematical definition of why the criticism is correct, and that
is progress. It says something about the form a final theory needs to take.

My thinking has changed dramatically. I just don't know how to measure
that. Progress is not the same as visible progress. Now that I have a
better idea of what it means to understand something, I also understand a
little better what it means to "explain" something, and it's clear that my
earlier explanations failed - people did not apply or build on the
knowledge. So now I try to explain simpler things at greater length, for
it is better to understand just one thing than to be confused by two
dozen. But the flip side is that my progress past LOGI is something I
have not yet fully written up; it involves things on the order of Bayesian
probability, expected utility, and the character of mathematical logic,
and of these things I have so far only tried to explain my thoughts about
Bayesian probability. So there is
progress but it is not easily visible progress. If you look at my recent
works they have a different character than my earlier works and in some
cases I have given mathematical explanations of what's wrong with my
earlier works. That's visible progress.

> How far is SI currently from
> the point at which its programmers can begin writing code?

How on Earth am I supposed to know this? I could make up a number. Where
would it come from? I hope I am not to be penalized if, unlike other
futurists, I know better than to make stuff up.

> And, generally,
> what else besides completing theoretical framework(s) needs to happen
> before SI is ready to launch its project?

1) Framework
2) People
3) Funding

--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT