From: Mark Nuzzolilo (email@example.com)
Date: Tue Feb 12 2008 - 08:19:04 MST
I interpreted differently the original premise that building an AGI can lead
to FAI. I would hope that in doing so, developers would engineer an
interface (or several) that would allow higher-level manipulation of the
system, including behavior and goals. While this isn't as easy as snapping
your fingers, the approach seems more realistic to me than the headache-inducing
pencil-and-paper approach to FAI. Some of us need to consider going beyond
the paranoid attitude of believing that AGI will likely be a black box with
"free will" and "interest" (especially in its development stages!!!).
On Feb 11, 2008 3:52 PM, Eliezer S. Yudkowsky <firstname.lastname@example.org> wrote:
> Why go to all the trouble of building an AI? Why not just build a
> natural-language-understander that compiles English requests to
> programs, and then type into the prompt, "Please make an AI"?
> The English-to-program-compiler is hence AI-complete, meaning that if
> you can build it, you can build an AI - hence you shouldn't expect it
> to be any easier than AI.
> Similarly, building an AI that knows what you "really mean" by
> "Friendly" when you type "Please make a Friendly AI" at the prompt, is
> FAI-complete, and not any easier than building a Friendly AI.
> (I find that conversations of this sort have more the shade of someone
> trying to figure out how to game the Dungeons and Dragons rules for
> the wish spell, than AI science... remember, nothing ever runs on
> English rules; even your brain doesn't run on English rules.)
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT