Re: Volitional Morality and Action Judgement

From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Sun May 23 2004 - 00:05:32 MDT


Eliezer,

So, let me feed this back to you just to be sure I've understood...

---
Your definition of a sentient, or 'person', is a process that has
consciousness and possibly qualia, among other things.
Also, if you can avoid giving an FAI consciousness, then you will feel much
more comfortable creating it, performing source control, etc., as there will
be no moral imperatives involved.
---
I believe you are going to have a lot of trouble tweezing general
intelligence away from consciousness.  If you can, it would be a hell of a
thing to see.  For the record: I don't want to hurt a person either.  Should
we hold up creating FAI until we know precisely what a person is?  Until we
can accurately demarcate the borderline between person and non-person,
should we hesitate to digitally create something that might be a person?
If we cannot say exactly why we humans are *persons*, then how can we
determine the personhood status of an AI?  You would have to answer two
questions simultaneously: "what makes a human a person?" and "what makes
an FAI a (non-)person?"  Again, that would be a hell of a thing to see.
Is this what you intend?

Michael Roy Ames

