All AGI projects are scary

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri Jun 04 2004 - 11:16:55 MDT


Ben Goertzel wrote:
>
> However, an AGI project that is:
>
> -- conducted and directed by people with extremely arrogant,
> overconfident and immature attitudes
>
> -- specifically oriented toward depriving people of free choice in favor
> of some other value (in your case, estimated "volition")
>
> is significantly more scary than the average AGI project. That was my
> main point.

Oh, give me a break. All AGI projects are scary. Period. Anyone who does
not realize this is scarier than any amount of, ahem, "overconfidence".

And as for depriving people of free choice, Ben, what exact form and
dynamic of "free choice" will you irrevocably write into place forever,
regardless of what future minds think of your decision? You write of
"Choice, Growth, and Joy" without asking whether humanity might want
something quite different if we had a thousand years to ponder the problem,
and "volition" is my shot at turning the deadly dangerous problem of the
meta-dynamics of "free choice" over to a wiser humane decision process. If
a Last Judge (or whatever) peeks at the result and "volition" doesn't work
as intended, people are seriously unhappy and so on, there's a chance to
call a halt. There are things that can go wrong with this plan, but at
least I am *planning* to leave emergency exits and ask for smarter-mind vetoes.

You speak of my overconfidence, I who propose failsafes and backup plans,
write papers describing possible classes of error in my systems, and do not
try to guess for myself what morality humanity will want in five thousand
years. You show no sign of this *in-practice* humility. You speak of my
arrogance and overconfidence, yet you show not one sign of taking safety
precautions far in advance, or of considering the errors you might have
made in your proposals.

That's scary.

Me, I'm just a bad-boy mad scientist.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

