From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Aug 02 2001 - 18:39:07 MDT

James Higgins wrote:
>
> If you were talking about our ability to create a friendly AI, we
> agree. However, the AI will have to evolve many, many times in order to
> become an SI. During any one of these evolutions it could, intentionally
> or not, remove or hamper friendliness. Some of these could entail a
> complete, from-the-ground-up rewrite, using none of the original code and
> only hand-picked logic/data. Friendliness, as a requirement, could easily
> fall out during such a transition. It could decide that it would be better
> off without some of the code/data that is part of friendliness. Further,
> it could at some point ponder why it is supposed to be friendly at all. It
> could decide that being friendly to humans is not a top priority, or that
> how to be friendly should be completely different from what we envision.

Issues like these are specifically the whole point of "Creating Friendly
AI". I "get" the Singularity, okay? I'm sort of proverbial for that.
And that full sense of the Singularity, as described in "Staring into the
Singularity", true unknowability, is what I had in mind when I wrote
CFAI. A Friendly AI is a human-equivalent philosopher. If the
post-Singularity world is best handled by a Sysop Scenario and volitional
Friendliness, then that's what a Friendly AI can do. If the
post-Singularity world is totally unknowable in terms of our philosophy
and we the creators were way off base in imagining what would need to be
done and why, if our imagining of the future is no better in absolute
terms than the imaginings of twenty thousand years back, then a Friendly
AI is supposed to be able to handle that too!

The post-Singularity world is closed to our vision. But humans are alive
here and now. Humans are not unknowable in the way that transhumans are.
So while there's a sense in which we don't know whether the ultimate
challenge of the Singularity can be successfully faced at all, I can still
stake and defend the claim that a created mind of a certain type can face
down that challenge as well as any human.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence