Re: Volitional Morality and Action Judgement

From: Samantha Atkins (samantha@objectent.com)
Date: Wed May 19 2004 - 22:50:00 MDT


On May 17, 2004, at 8:50 PM, Keith Henson wrote:

>
> The most fundamental actions a person can take involve reproduction.
> I am personally *extremely* uncomfortable because the logic and my
> personal feelings are in deep conflict. If there is unlimited
> reproduction or even replication in a limited environment, eventually
> the population is reduced to extreme material poverty. They just
> don't have enough atoms available.

What do logic or your feelings have to do with it? We are fast
developing the ability to reprogram ourselves. We need not be bound by
any supposed past evolutionary imperative that no longer serves what we
choose to do next. Reading EP and understanding it should not lead to
the belief that we are any less free to rewrite that programming. It
sometimes seems to me that even SL4 folks aren't fully cognizant of
the incredible degree of freedom that is pending.

>

> Now, with our memes shaped by a number of generations of relative
> plenty, we think that killing off the neighboring tribe's males and
> taking their resources and women is double-plus-ungood. But if it
> comes down to strong restrictions on breeding or an occasional bout
> of slaughter-and-be-slaughtered, which do you pick? The simplest math
> will tell you the human race will be forced to pick one or the other,
> either by our own volition or by an AI's imposition.
>

It quite obviously comes down to no such thing unless we are foolish
enough or simply too slow to grasp and exercise the potentials now
before us. Why waste valuable time attempting to choose between two
obviously inferior positions in the space of all possibilities?

> (My personal preference is the third way, leave for the far side of
> the galaxy and let others figure out what to do.)

This is also an overly limited and limiting choice, in my opinion.

>
>> Keith, when you wrote: "...understanding these [Ev.Psyc.] matters
>> might be
>> essential to providing the environment in which friendly AI can be
>> developed."
>> -- Sort of. It is not the environment that will be improved, but the
>> accuracy of the FAI's human-cognition model.
>
> I wasn't clear as to what I meant. AI research requires considerable
> technological support. An environment that, because of massive
> resource wars, lacked computers and even food for the researchers
> would not be conducive to much progress.
>

Even without improving human beings, and even without AI or MNT, there
is no need for a resource war, especially over energy. It is more
likely that we will wreck our economies (as we are doing a fine job of
in the US) and act so belligerently as to bring on a war of most other
countries against the US. But it will not be over resources per se,
although that may be the most apparent "explanation".

>> It is very important that an
>> FAI understand the ways in which humans think so that it can better
>> model the
>> future, and better understand the human-generated data that will be
>> presented to it. It is not enough for an FAI to determine that Johnny
>> behaves with an approximation to Bayesian rationality 82.6% of the
>> time.
>> FAI needs to know what Johnny is mentally doing the other 17.4%, and
>> why,
>> and in what situations his cognition is likely to switch between
>> modes.
>

The implicit assumption that humans will remain relatively static must
also be overcome.

- samantha


