From: jasonjoachim (jasonjoachim@yahoo.com)
Date: Mon Mar 31 2003 - 08:38:57 MST
--- Samantha Atkins <samantha@objectent.com> wrote:
> Wait just a second. You do support the right of sentients to self-determine, including the right to tell the Friendly AI to stay out of their affairs unless they ask for its help, I believe. If so, then some suffering is perfectly consistent with a Friendly AI as such. The question then becomes what happens when the sentient does ask for an end to their suffering. I am not at all sure that it would be in the sentient's best interest, and thus truly friendly, for the FAI to simply fix anything and everything in the sentient's space or nature that led to the suffering. Remember that much of a sentient's suffering is due to internal characteristics, beliefs, programming, whatever, of said sentient. To simply remove/change all of those immediately would likely damage the identity matrix of the sentient and/or have many consequences unforeseen (by the sentient) and not desired. So again, it is not at all obvious that the FAI would remove all suffering. Medieval torture chamber, yes; rewiring brains so they are not instrumental in their own suffering? I have strong doubts that would be unambiguously moral.
>
> - samantha
By what possible mechanism would you determine an individual's "best interests"? How would you like that to be concluded for you? And might the method of conclusion then vary between individuals?

Help isn't supposed to have unintended consequences. And the ideal is that the "intentions" are your own. That's what "help" is.
So the question is, "Why would active help be
undetectable?" That's sure not an intention of mine.
Jason Joachim