From: Ben Goertzel (ben@goertzel.org)
Date: Sun May 08 2005 - 21:37:36 MDT
> Yes, that's why I described it as a theory of Friendliness _content_
> as opposed to the (first and harder) problem of Friendliness
> _architecture_.
OK, but even as a theory of a "desirable goal state", there are BIIIIIG
unresolved issues with your idea, aren't there?
For instance, to specify the goal state we need to define the notion of
"sentience" or else trust the Sysop to figure this out for itself...
Because, I assume, you want the Sysop to give each sentient being a choice of
which domain to live in?
This raises the question of how to define what counts as a "sentient being".
Suppose I want to create a universe full of intelligent love-slaves... and
suppose there aren't any sentients who want to live their lives out as my
love-slaves. So I create some androids that *act* like sentient
love-slaves, but are *really* just robots with no feelings or awareness....
Or wait, is this really possible? Does sentience somehow come along
automatically with intelligence? Does "sentience" as separate from
intelligence really exist? What ethical responsibilities do we have toward
different kinds of minds with different ways of realizing intelligence?
I'm not arguing that your idea is bad, just that when you dig beneath the
surface it turns out to be very poorly defined, for reasons that have been
discussed on this list (and elsewhere) a lot in the past...
-- Ben G