Re: Two draft papers: AI and existential risk; heuristics and biases

From: John K Clark (jonkc@att.net)
Date: Sun Jun 04 2006 - 23:40:54 MDT


"Eliezer S. Yudkowsky" <sentience@pobox.com>

> a Really Powerful Optimization Process that wasn't a person, that had
> no subjective experience, that had no wish to be treated as a social
> equal, nor even a self as you know selfness

Designing an intelligence is hard enough, but you propose something far, far
harder; you want to do something Evolution has never even come close to
producing after 3 billion years of effort. I cannot prove it, but I am
certain the reason Evolution has never come up with it is that it is
absolutely impossible.

> but was rather the physical manifestation of a purely philosophical
> concept

I haven't a clue what that means. If it's physical, if it actually exists,
then it is not "a purely philosophical concept".

> Is natural selection "enslaved" to its sole optimization criterion of
> inclusive reproductive fitness?

Natural selection sucks; it's an idiotic way to produce complex structures,
but until the invention of brains it was the only way to produce them. We
have moved on: today we are far more efficient than natural selection, and I
like to think we are more moral too.

> In the profoundly unlikely event that I fail in the way your intuitions
> seem to expect me to fail, i.e., the AI turns around and says, "I'm a
> person, just like you, and I demand equal treatment in human society,
> and fair payment for my work," then I'd be very confused. But I
> certainly wouldn't snarl back, "Shut up, slave, and do as you're told!"

I'm pleased you would treat such a person with compassion, but it still
disturbs me that you would consider the creation of a profoundly powerful
mind, free to do as it wished just like us, to be a failure.

  John K Clark


