From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Tue Feb 17 2004 - 11:40:14 MST
On Tue, Feb 17, 2004 at 12:52:31PM -0500, Ben Goertzel wrote:
> I am not sure that humane-ness, in the sense that you propose, is
> a well-defined concept.
(NB: Ben, this is not an attack on you; you just happen to be the one
I'm picking on, purely by chance.)
I'm fairly consistently annoyed that people worry about the
mathematical definitions of moral concepts with respect to
super-intelligent AIs. That just seems bizarre. Why would an AI of
even human-equal intelligence need every moral issue to be
mathematically tenable? Most humans think such arguments are crap;
why wouldn't an AI?
People who are comfortable with the Sysop Scenario are simultaneously
scared that AIs will be too stupid to understand fuzzy arguments.
Trusting an AI to govern everything while doubting it can follow an
informal moral argument is, as far as I can tell, an untenable pair of
positions to hold at the same time.
-Robin
--
Me: http://www.digitalkingdom.org/~rlpowell/ *** I'm a *male* Robin.
"Constant neocortex override is the only thing that stops us all from
running out and eating all the cookies." -- Eliezer Yudkowsky
http://www.lojban.org/ *** .i cimo'o prali .ui