Re: Collective Volition: Wanting vs Doing.

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Jun 14 2004 - 23:57:49 MDT


On Jun 14, 2004, at 1:20 PM, Eliezer Yudkowsky wrote:
>
> I see no reason why I should care about genes or memes except insofar
> as they play a role in individuals built by genes who are running
> memes. What exerts the largest causal influence is not necessarily
> relevant to deciding what is the *important* aspect of humanity; that
> is a moral decision. I do not need to make that moral decision
> directly. I do not even need to directly specify an algorithm for
> making moral decisions. I do need to tell an FAI, in a well-specified
> way, where to look for an algorithm and how to extract it; and I am
> saying that the FAI should look inside humans. There is much
> objection to this, for it seems that humans are foolish. Well, hence
> that whole "knew more, thought faster etc." business. Is there
> somewhere else I should look, or some other transformation I should
> specify?
>

Now I am confused. Our psychology was shaped by our evolution. What
we desire and see as the good is shaped by our evolution. So how can
you not care about the genes? They had no small part in determining
the memes. If the result of CV extraction and use is to be good for
human beings, then isn't the kind of creatures humans are (their genes)
quite relevant? Can a moral decision for the sake of a group of
entities be meaningful if it doesn't consider the nature, including the
limitations, of the entities involved? Is it assumed that if we "knew
more and thought faster" we would no longer be running the same
evolutionary programming? Even if this is so (and I believe it is),
the AI would still need to work through the EP-determined programming
of human beings in order to gradually "uplift" them into their CV
ideal.

- samantha


