From: Eliezer S. Yudkowsky
Date: Sun Apr 07 2002 - 09:14:31 MDT

Carlo Wood wrote:
> From your remark, especially the one where you say that you might
> not have time to reply, I conclude that you are not interested
> in hearing my ideas.

True. There are a *lot* of ideas about AI, and I don't have time to listen
to all of them. You have a very short time to catch someone's attention if
you want to talk to them about a new approach to AI - a principle that I
follow in my own work as well, of course. Please understand that there is a
failed-classical-AI generation schema embedded in the minds of most
programmers; programmers are trained to think through how to solve a problem
and translate those thoughts into code, and there is sometimes confusion
between this and the task of building a system that can give rise to
thoughts on its own. I tend to filter out those ideas very quickly.

I suspect the same is true of most other AI researchers as well. I'm just
more willing to run the risk of offending someone in order to explain it
openly.

> You've already made clear that you don't believe in making an effort
> together with volunteers on the other side of the World, so - unless
> you convince me now otherwise - my decision has been made that it
> won't be us who will cooperate and I will unsubscribe and continue
> my conversation in private with Ben.

Good luck, but while I can't speak for Ben, my advice to you would be not to
take his attention for granted - he's doing you a favor if he listens to
your ideas, you're not doing him a favor by sharing them. Same rule as for
writers submitting their stories to editors, or for that matter writers
sharing their stories with an audience. My AI ideas are, in my humble
opinion, brilliant, but everyone (including complete amateurs) who takes the
time to read one of my pages is still doing me a favor, not the other way
around. Even so I don't always have time to return the favor, unless
something really catches my attention.

-- -- -- -- --
Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT