From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Tue Jun 06 2006 - 15:27:38 MDT
On Tue, Jun 06, 2006 at 04:11:00PM -0400, Martin Striz wrote:
> On 6/6/06, Robin Lee Powell <rlpowell@digitalkingdom.org> wrote:
>
> >Again, you are using the word "control" where it simply does not
> >apply. No-one is "controlling" my behaviour to cause it to be
> >moral and kind; I choose that for myself.
>
> Alas, you are but one evolutionary agent testing the behavior
> space. I believe that humans are generally good, but with 6
> billion of them, there's a lot of crime. Do we plan on building
> one AI?
Who knows? But the argument tends to be that *any* AI would see
morality as a constraint or a controlling measure; something to be
solved.
> I think the argument is that with runaway recursive
> self-improvement, any hardcoded nugget approaches
> insignificance/obsolescence. Is there any code you could write
> that nobody, no matter how many trillions of times smarter,
> could find a workaround for?
Clearly not; the point is that if a being *wants* to find a
workaround to its own morality, it is not moral in the sense I
use the word.
In fact, any being that wants to find a workaround to its own core
morality is almost definitionally insane. Sounds like borderline
personality disorder to me, if not actual sociopathy.
-Robin
--
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/