From: Thomas Buckner (firstname.lastname@example.org)
Date: Mon May 09 2005 - 04:20:23 MDT
--- Ben Goertzel <email@example.com> wrote:
> Because, I assume you want the Sysop to give each sentient being
> choice of which domain to live in?
> This begs the question of how to define what is a "sentient being"?
> Suppose I want to create a universe full of intelligent love-slaves...
> and suppose there aren't any sentients who want to live their lives
> out as my love-slaves. So I create some androids that *act* like
> sentient love-slaves, but are *really* just robots with no feelings
> or awareness....
> Or wait, is this really possible? Does sentience somehow come along
> automatically with intelligence? Does "sentience" as separate from
> intelligence really exist? What ethical responsibilities exist to
> different kinds of minds with different ways of realizing
So you're saying we really do need to know the human brain much
better, *even though* FAI design will not mimic it, simply so that
the FAI will understand what you're trying to protect. Isn't it
enough to give it a supergoal of "Keep the humans alive and
comfortable, and don't mess with the functioning of human brains
until we know what makes them tick"?
This archive was generated by hypermail 2.1.5 : Mon May 20 2013 - 04:00:48 MDT