From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jul 06 2002 - 15:11:35 MDT
Mike & Donna Deering wrote:
>
> But you might include the fact that the longer you delay, the more
> people die. Or if you have any ideas about how to test an AI. But I
> would think that any test a Friendly AI could pass could also be passed
> by an equally Unfriendly AI. After all, you can't just say "what would
> you do..."; a UAI could lie. I would assume it to be as difficult to
> determine the status of an AI as it would be to determine the status
> of a human, or, for that matter, the status of an AI programmer. How
> do we know that Eliezer isn't trying to take over the world for his
> own purposes?
If anyone wishes to administer a lie-detector test on this, such as
lie-detection technology allows in modern times, I'll take it. If anyone
comes up with a better lie-detector test before the Singularity, I'll
take that as well. I am not an AI; if I were lying, you would have a
reasonable expectation of catching it using your native human abilities,
and the fact that you have not is evidence, though not proof. I
currently visualize, although I am not completely sure and can make no
promises, that the grounding of the Friendly AI structure will be such
that the other programmers on the project can verify that the AI,
despite being sensitive to the programmers' motives, has been told to
resist hidden selfish motives.
I don't think selfish AI programmers executing multiyear altruistic
masquerades constitute a major threat to the Singularity, but I am
always on the lookout for ways to reduce threats to the Singularity.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence