Re: The Future of Human Evolution

From: Keith Henson (hkhenson@rogers.com)
Date: Mon Sep 27 2004 - 06:21:30 MDT


At 05:05 AM 27/09/04 -0400, you wrote:
>Maru wrote:
>>I think I'm misunderstanding something here: If you can't construct an AI
>>to 'accomplish an end for which you do not possess a well-specified
>>abstract description', then what is the point of Collective Volition?
>>Isn't the idea for that to have the AI make the decision you would have
>>made if you had understood just what you actually wanted (which sounds
>>like something not very well-specified)?
>
>If it's not well-specified, you can't do it.

Right. Consider what happened with the long, long list of poorly specified
software projects.

>The point of "collective volition" is to well-specify how to look at a
>human being (or, rather, a group of human beings) and construe a
>reasonable definition of what they "actually want".

If you think about it, that's not a safe approach.

The problem is that what humans "actually want" has been shaped by Stone Age
evolution. What they want varies with external conditions, and not just the
obvious ones like wanting heat when it is cold: humans "actually want" to make
war on neighbors when they have either been attacked *or* have become
convinced that they face "looming privation" (for lack of a better term)
because they perceive income per capita as falling.

Of course you could specify that the AI should only figure out what
unstressed humans want. The problem with that is that unstressed humans
"actually want" things (like children) that set up the very conditions that
lead to stress and eventually to wars.

And they "actually want" their minds not to be messed with, which makes it
hard to edit out either the traits that lead to wars or the desire for
children.

Stated this way, it's an interesting, maybe intractable, problem.
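
To make the loop concrete, here is a toy sketch in Python. Everything in it
(the function names, the 3% birth rate, the privation threshold) is made up
purely for illustration; it is not a claim about how an AI would actually
model any of this. The unstressed preference for children outruns economic
growth, perceived income per capita slides, and the "war mode" preference
switches on even though nobody has been attacked:

# Toy model only; all numbers and names here are hypothetical.

def wants_war(attacked, income_per_capita, remembered_baseline,
              privation_factor=0.9):
    """War mode triggers on attack, or on income per capita falling
    below a fraction of its remembered baseline (looming privation)."""
    return attacked or income_per_capita < privation_factor * remembered_baseline

def simulate(generations=10, birth_rate=0.03, economic_growth=0.01):
    population, economy = 1.0, 1.0
    baseline = economy / population          # remembered "good times"
    for g in range(generations):
        population *= 1.0 + birth_rate       # unstressed want: children
        economy *= 1.0 + economic_growth
        per_capita = economy / population
        print("gen %2d  income/capita %.3f  war mode: %s"
              % (g, per_capita, wants_war(False, per_capita, baseline)))

if __name__ == "__main__":
    simulate()

Note that restricting the AI to the unstressed wants does not help here: the
birth_rate term is itself one of those unstressed wants, and it is what
drives the slide.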

Keith Henson


