Re: Friendly AI in "Positive Transcension"

From: Metaqualia (metaqualia@mynichi.com)
Date: Sun Feb 15 2004 - 20:33:46 MST


> > OK, well if you can't even summarize your own work compactly, then how
> > the heck do you expect me to be able to do so??? ;-)
> I don't, of course. However, I should hope that you would be aware of
> your own lack of understanding and/or inability to compactly summarize
> the issue. It's not surprising that you see no superiority of FAI over Joyous
> Growth if you attempt to somehow interpret FAI as a specific ethical
> principle. This is actually impossible, like trying to interpret a
> mathematical system as the quantity 15, but after you've invented a

Consider this summary.

"Eliezer::FAI consists in avoiding the imposition of moral supergoals (do
this, do that) and recreating the cognitive architecture that humans use to
set these moral goals for themselves (the machine thinks: it looks as if I
should do this, do that)"

Is that such a bad compact summary?

mq



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:45 MDT