From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jul 16 2002 - 14:29:49 MDT
James Higgins wrote:
> Eliezer S. Yudkowsky wrote:
> >
> > James, I never, ever, ever suggested explicitly programming in the Sysop
> > Scenario. I thought (and still think) that imagining an FAI having to
>
> Explicitly programming in vs endorsing is only slightly meaningful. And
> I do believe that you endorsed the Sysop scenario. If I am mistaken
> please let me know.
What do you mean, "only slightly meaningful"? There's a huge
difference! Let's say that I endorse the idea that 2 + 2 = 5. There's
a big difference between explicitly programming the AI with my wrong
idea that 2 + 2 = 5 and telling the AI to work out on its own what 2 + 2
equals. If the AI decides that 2 + 2 = 5 because that's what the
programmer thought, that's a much larger problem than the consequences
of that one particular wrong answer. It means not only that the AI
isn't capable of overcoming the mistakes of the programmer, but (in this
particular example) that a programmer's mistaken conclusion has been
absorbed even when there was a deliberate effort to firewall it out.
A real Friendly AI should be perfectly capable of working out "4" given
"5" by:
1) Hearing the word "5";
2) Deducing (given sufficient intelligence, plus a brain scan if
necessary) that the programmer was trying to add "2 + 2" when the word
"5" was uttered;
3) Recreating the operation "2 + 2" at a higher level of intelligence
to arrive at "4".
Lather, rinse, recurse.
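(As a rough illustration of the distinction being argued here, not anything
from the original post, here is a toy Python sketch. The function names and
the tuple of "inferred operands" are purely hypothetical stand-ins for steps
2 and 3 above: deduce what operation the programmer meant, then redo it
independently rather than copying the stated answer.)

    # Toy sketch: absorbing a programmer's stated answer vs. recreating
    # the intended operation. All names are hypothetical illustrations.

    def absorb_answer(stated_answer):
        """Naive approach: treat the programmer's utterance as ground truth."""
        return stated_answer  # "2 + 2 = 5" is simply copied forward


    def recreate_operation(stated_answer, inferred_operands):
        """Friendlier approach: infer which operation the programmer was
        attempting (step 2 above) and redo it independently (step 3),
        rather than trusting the stated answer."""
        a, b = inferred_operands      # e.g. deduced from context, or a brain scan
        recomputed = a + b            # recreate "2 + 2" on its own
        if recomputed != stated_answer:
            # The programmer's conclusion was wrong; return the corrected result.
            return recomputed
        return stated_answer


    if __name__ == "__main__":
        print(absorb_answer(5))               # -> 5 (the mistake is propagated)
        print(recreate_operation(5, (2, 2)))  # -> 4 (the mistake is overcome)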
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence