Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

From: Peter de Blanc
Date: Wed Jun 07 2006 - 14:51:37 MDT

On Wed, 2006-06-07 at 16:13 -0400, Mark Waser wrote:
> I'm pretty sure that I've got the science and math that I need and, as I

Okay. I supposed the opposite, not because of anything you said, but
because the base rate is so low.

> said above, I don't feel compelled to listen to everyone. However, if I
> can't get a decent consensus out of a pretty bright, educated group (or at
> least, the open-minded, bright, and educated members of a group like this),
> then it's a pretty good sign that my ideas aren't where they should be.

I think it's just hopeless. You'll never get a consensus out of a
pre-selected population this size on a new idea.

Consider that the theory of evolution is not part of the world's
consensus. Consider that Bayes' Theorem is not part of the
scientific consensus. It isn't even part of this list's consensus! These
are ancient ideas - way older than us. The consensus lags *centuries*
behind people who think.

> It IS my contention that there is a relatively simple,
> inductively-robust (in a mathematical proof sense) formulation of
> friendliness that will guarantee that there won't be effects that *I*
> consider undesirable, horrible, or immoral. It will, of course/however,
> produce a number of effects that others will decry as undesirable, horrible,
> or immoral -- like allowing abortion and assisted suicide in a reasonable
> number of cases, NOT allowing the killing of infidels, allowing almost any
> personal modifications (with truly informed consent) that are non-harmful
> to others, NOT allowing the imposition of personal modifications whether
> they be physical, mental, or spiritual, etc.

How relatively simple? Evolution doesn't do simple. I doubt that any
human goal system has a simple mathematical formalization.
