Re: General summary of FAI theory

From: Stefan Pernar (stefan.pernar@gmail.com)
Date: Tue Nov 20 2007 - 18:40:19 MST


On Nov 21, 2007 5:20 AM, Thomas McCabe <pphysics141@gmail.com> wrote:

> SL4 is supposed to be for advanced topics in futurism, not endlessly
> rehashing the basics. Some of the things which have already been
> covered years ago, and are therefore ineligible for rehashing:
> <SNIP>
>

I would like to propose a different approach to the friendliness problem.
The CEV concept of friendliness is comparable to the consensual-ethics
approach in morality. The core question is:

What must I want an AI to do? Or, to put it in the words of Immanuel Kant:
what is a non-contradictory maxim for my actions? You have probably heard of
Kant's categorical imperative. It goes something like this:

"Act only according to that maxim whereby you can at the same time will that
it should become a universal law."

The trick lies in formulating your maxim or, put in AI-friendly terms, in
defining a utility function. To satisfy the categorical imperative, it must
not lead to contradictions when assumed as a universal law guiding one's
actions. Consider the following utility function:

ensure continued co-existence

Can I want to have it as a universal law? The answer is either yes or no.
Let's consider the consequences of both:

Yes: this would mean I must ensure my own existence as well as the existence
of others -> no contradiction
No: this would mean I want to be destroyed, but once I am destroyed I cannot
contribute to any utility -> contradiction

How about others? Can another agent object to my having this goal?

Yes: this would consequently imply a desire for self-destruction, but once
destroyed, an agent cannot contribute to any utility -> contradiction
No: this would mean that others must ensure my existence -> no contradiction
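
For the programmatically inclined, here is a minimal sketch in Python of
the two consistency checks above. The Agent class, the binary notion of
existence, and the coexistence_utility function are of course just
illustrative toys of mine, not a formal model from the paper:

    # Toy universalizability check for the maxim
    # "ensure continued co-existence". Everything here is an
    # illustrative assumption, not a formal model.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Agent:
        name: str
        exists: bool = True

    def coexistence_utility(agents: List[Agent]) -> int:
        # Utility grows with the number of agents that keep existing.
        return sum(agent.exists for agent in agents)

    def maxim_is_consistent(agents: List[Agent], adopt: bool) -> bool:
        # Universalize the choice: either every agent ensures continued
        # existence, or every agent wills destruction.
        for agent in agents:
            agent.exists = adopt
        # A destroyed agent contributes nothing to any utility, so a
        # universal law that drives total utility to zero undermines
        # the very idea of acting on a utility function.
        return coexistence_utility(agents) > 0

    agents = [Agent("self"), Agent("other")]
    print(maxim_is_consistent(agents, adopt=True))   # True  -> no contradiction
    print(maxim_is_consistent(agents, adopt=False))  # False -> contradiction

Adopting the maxim leaves every agent able to contribute utility, while
rejecting it drives total utility to zero, which is exactly the
contradiction described above.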

The key lies in finding a compromise between the self and others. If you
prefer a more technical explanation, I wrote a paper on the matter,
"Practical Benevolence - A Rational Philosophy of Morality", which is
available at:

http://rationalmorality.info/wp-content/uploads/2007/11/practical-benevolence-2007-11-17_isotemp.pdf

Looking forward to your feedback.

Many thanks,

Stefan

-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar

