Re: How hard a Singularity?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jun 25 2002 - 16:25:44 MDT


Ben Goertzel wrote:
> Eliezer,
>
> I'm sorry if I've misinterpreted your statements over the past couple
> years...
>
> In our various interactions, you have seemed to me to display a HUGE
> confidence that
>
> a) you personally are somehow uniquely suited or even "destined" to play
> a key role in bringing the Singularity about
>
> [others have gotten this impression from you as well; I recently received
> a personal e-mail in which someone else referred to (his wording) your
> idea that you are "The One"]

Suppose I did think that I was uniquely suited to playing a key role in the
Singularity. It wouldn't necessarily make me confident of my ideas.
Humility should not be confused with modesty. Humility is a way to conduct
yourself in an interaction with Nature; modesty is a way to conduct yourself
in an interaction with others. I believe in humility but not modesty.
Humility is a form of rationality. Modesty is not. And how bright you
think you are, and how far you think your ideas have gotten at any given
point, are rationally unrelated, regardless of whether they may be linked
emotionally by the factory settings of the human mind.

Incidentally, for my actual views on this topic, see:
http://sysopmind.com/archive-sl4/0204/0118.html

> b) your approach to Friendly AI is the right way to ensure the
> Singularity comes out well

Again: There is a difference between having gotten far enough to know that
a given proposal isn't enough, and believing that your own proposal will
work. As for the Friendly AI part of it, I think that again I may have
failed to convey the question to which Friendly AI is an answer. It's not
about getting an AI to do something for you or instilling an AI with a vague
feeling of helpfulness toward humans; if that were the case there might,
indeed, be many different right ways. The question to which Friendly AI is
intended as an answer is "With the future of all humanity at stake, how do
you construct a self-improving mind so as to enter the best possible
Singularity?" We only get one Singularity and it has to be the best
Singularity possible; not almost the best, but the best.

Anyway, I still think you're confusing "You are wrong" with "I am right".
Friendly AI programmers saying "I am right" is worrisome. Friendly AI
programmers having to repeat "You are wrong" over and over again until holes
are worn through their tongues is only to be expected.

The strongest statement I would make about Friendly AI is "I've been looking
at this section of floor for two years, I have my trap detectors shoved out
to absolute maximum, and I still haven't detected any basic flaws, so at
this point, even taking into account how much is at stake, I'm ready to put
one foot down and start shifting my weight over."

This is not incompatible with plenty of repetitions of:

"Argh! Wait! Don't go there!"
"Hmph. What makes you such a better trap detector than I am? You think you
know the one true path through this labyrinth? Maybe my path is better, did
you ever think of that? Jeez, I never met anyone as arrogant as you. What
makes you so confident that YOUR route is safe?"
"Look, I'm not trying to shove in on your territory, but could you humor me
and just take one step to the right? To the RIGHT. No, that's - shoot."

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence