From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Mon May 09 2005 - 04:20:23 MDT
--- Ben Goertzel <ben@goertzel.org> wrote:
> Because, I assume, you want the Sysop to give each sentient being the choice of which domain to live in?
>
> This begs the question of how to define what a "sentient being" is.
>
> Suppose I want to create a universe full of intelligent love-slaves... and suppose there aren't any sentients who want to live their lives out as my love-slaves. So I create some androids that *act* like sentient love-slaves, but are *really* just robots with no feelings or awareness.... Or wait, is this really possible? Does sentience somehow come along automatically with intelligence? Does "sentience" as separate from intelligence really exist? What ethical responsibilities exist to different kinds of minds with different ways of realizing intelligence?
So you're saying we really do need to understand the human brain much better, *even though* FAI design will not mimic it, simply so that the FAI will understand what you're trying to protect. Isn't it enough to give it a supergoal of "Keep the humans alive and comfortable, and don't mess with the functioning of human brains until we know what makes them tick"?
Tom Buckner