Does Friendliness Structure constrain Content and vice versa? [was: RE: AGI Prototyping Project]

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Wed Feb 23 2005 - 20:45:52 MST


>Needless to say these should be alternative ways of using a seed AI
>to work out what the best environment for humanity's future to
>unfold in is, not personal guesses on the Four Great Moral
>Principles For All Time; any mechanism chosen should be one that
>(rational) people who believe in Great Principles could look at and
>say 'yeah, that's bound to either produce my four great principles,
>or fail harmlessly' (e.g. if Geddes was right, superintelligent CV
>extrapolations would find the objective morality).

Since I'm a believer in Universal Volition, I certainly think that
Collective Volition is a valid concept. Indeed, as you point out,
extrapolating CV should pick up my Universal Morality, although this
would be a hugely inefficient and unnecessary way of getting to it.

The reason I think CV can't be calculated by a Singleton, though, is
computational intractability and qualia. The more information that
flows into the system, the faster the combinatorial explosion. The
only way to avoid intractability would be to apply more and more
ingenuity and deviate further and further from the Bayesian ideal to
get clever approximations and short-cuts. But what if this
'ingenuity' and 'deviation from the Bayesian ideal' is exactly what
gives rise to qualia? In that case, the RPOP couldn't escape
intractability without becoming a volitional entity. That would
screw everything up, since the RPOP would then have to include its
own volition in the calculation of CV in order to make predictions,
leading to an infinite regress. To sum up: CV is never going to
work!
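
To give a feel for the kind of blow-up I mean, here's a
back-of-the-envelope toy model (the numbers and the 'binary features
per person' framing are arbitrary placeholders of mine, not part of
the CV proposal itself): if each person's extrapolated volition were
summarized by k binary features, an ideal Bayesian treatment would
have to reason over a joint space of 2^(n*k) configurations for n
people.

    import math

    # Toy model: each person's extrapolated volition is summarized by k
    # binary features, so the joint configuration space has 2**(n*k)
    # states. Report the number of decimal digits, since the raw number
    # quickly becomes absurd.
    def joint_space_digits(people, features_per_person=8):
        return int(people * features_per_person * math.log10(2)) + 1

    for n in (1, 10, 100, 1000, 6_000_000_000):
        print(f"{n:>13,} people -> ~{joint_space_digits(n):,} digits' "
              f"worth of joint states")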

I do think I know what the four great moral principles are; I can
even tell you their order of importance. Ranked with the most
important value first, here they are:

1. Growth
2. Altruism

3. Happiness
4. Health

The combination of values 1-2 is equivalent to
Volition (Friendliness structure). The combination of
values 3-4 is equivalent to Eudaimonia (Friendliness
content). I shall now state my fundamental theorem of
morality (which I'll try to briefly explain shortly):

Morality = Eudaimonia x Volition
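
For concreteness, here is the grouping written down as plain data
(this is just bookkeeping of mine: reading the 'x' literally as a
pair, and treating each value as a numeric weight, are arbitrary
placeholder choices, not part of the claim itself):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Volition:            # Friendliness Structure = values 1-2
        growth: float          # 1: most important
        altruism: float        # 2

    @dataclass(frozen=True)
    class Eudaimonia:          # Friendliness Content = values 3-4
        happiness: float       # 3
        health: float          # 4

    @dataclass(frozen=True)
    class Morality:
        # "Morality = Eudaimonia x Volition", read here as a product/pair.
        eudaimonia: Eudaimonia
        volition: Volition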

I busted my balls for 2 years to finally get to this.
These values are not something I just pulled out of my
arse. If you check out my rough 8-level schematic of
intelligence, you'll see that I think these values
emerge in a natural way from the operations and
interactions of the various levels of intelligence.
So I think that Universal Values are 'emergent' from
general abstract properties that all Sentient minds
have in common.

Of course all I've managed to do so far is come up
with intuitive arguments and state a plausible
intuitive conjecture about morality. I haven't
actually proved anything yet. I had to bust my balls
for two years just to manage to state the conjecture
;)

>Bzzzt, wrong. This is the mistake Eliezer made in 1996, and didn't
>snap out of until 2001. Intelligence is power; it forces goal
>systems to reach a stable state faster, but it does nothing to
>constrain the space of stable goal systems. It may cause systems to
>take more moral seeming actions under some circumstances due to a
>better understanding of game theory, but this has nothing to do
>with their actual preferences.

Actually, I think the Objective Morality idea is correct, although
making it work is probably rather more subtle than a naive attempt
to equate facts with values.

I'm inclined to think that values and morals come from higher levels
of intelligence, above ordinary reasoning. If I'm right, the
relationship between facts and values is analogous to the
relationship between the mind and the brain. The mind is not the
brain, but the mind is totally *dependent on* (caused by) the
processes taking place in the brain. Similarly, although
morals/values are not the same thing as facts, they are totally
*dependent on* facts. So objective morality could be fully
*described by* (or decomposed into) clusters of facts.

If you think of the mind as analogous to a sort of mini 'Collective
Volition' (Minsky's 'society of mind', so to speak), then it does
seem very plausible that morality should tend to be correlated with
intelligence. According to the CV idea, goodness coheres and hate
washes out at the social level. Why should it be any different at
the individual level? The more rationality is pumped into the goal
system, the harder it should be to be evil and still have a stable
goal system. Good tends to reinforce the stability of the goal
system; evil tends to rip it apart. So it would seem that an evil
super-intelligence would be very prone to mental instability.

There is also a suggestive analogy here with the coherence theory of
truth, with inconsistency being analogous to evil. Adding more and
more axioms to a system of logic makes it harder and harder to add
falsehoods and still maintain a consistent system. Similarly, adding
more and more intelligence should make it harder and harder for the
goal system to do evil things and still remain stable.
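
As a rough illustration of the coherence side of that analogy (a toy
propositional sketch of my own, with made-up 'axioms', nothing from
the actual theory), watch how the space of consistent truth
assignments contracts as axioms pile up, so an arbitrary extra claim
has fewer and fewer consistent worlds left to hide in:

    from itertools import product

    VARS = ["p", "q", "r", "s"]  # hypothetical propositions

    def count_models(clauses):
        # Count truth assignments that satisfy every clause so far.
        total = 0
        for values in product([False, True], repeat=len(VARS)):
            a = dict(zip(VARS, values))
            if all(clause(a) for clause in clauses):
                total += 1
        return total

    axioms = [
        lambda a: a["p"] or a["q"],        # p or q
        lambda a: (not a["p"]) or a["r"],  # p implies r
        lambda a: (not a["q"]) or a["s"],  # q implies s
    ]

    for i in range(len(axioms) + 1):
        print(f"{i} axioms -> {count_models(axioms[:i])} consistent "
              f"assignments out of {2 ** len(VARS)}")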

In my view these analogies are strongly suggestive of
objective morality.

In terms of 'Structure' versus 'Content', I have to ask whether
these are really two separate things when it comes to the mind. Of
course, ordinarily we think of Structure and Content as two separate
things, but perhaps the mind is a very peculiar type of function in
which Structure and Content constrain each other? I'm definitely
inclined to think that in the case of morality (Friendliness),
Content determines Structure and Structure constrains Content. If
I'm right, then solving the problem of Friendliness Structure should
also automatically give you the solution to Content, and vice versa.
Indeed, I think this is exactly the condition required for my
Universal Morality to exist.

To sum up the Mind, we could say that 'Reason' is the Content of the
Mind, and 'Morality' is the Structure of the Mind. So Morality is a
function which operates on the input 'Reason' (a system of axioms
and rules of logic), thus:

Mind = Morality(Reason)

Now here is where I want to suggest my big trick. Watch. I say that:

Mind = Morality x Reason

At first this seems to be gibberish. I'm trying to multiply a
function (Morality) by its input data (a system of Reason) and
saying the result is equivalent to the Mind. But I do suspect that
this is precisely the condition required for Structure to constrain
Content and vice versa. The system of Morality locks down the system
of Reasoning, and vice versa. Thus Objective Morality.
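
One way I can make the 'locking down' intuition slightly more
concrete is as a mutual fixed point: Structure prunes Content, the
pruned Content prunes Structure back, and the pair is stable only
when each reproduces the other. The sketch below is purely my own
toy reading, with invented pruning rules; it is not a definition of
Morality or Reason.

    # Toy reading of "X = Structure x Content" as a mutual fixed point.
    # Structure: a set of admissible "moves" (predicates on propositions).
    # Content:   a set of accepted propositions.
    def constrain_content(structure, content):
        # Keep only propositions endorsed by at least one admissible move.
        return {p for p in content if any(move(p) for move in structure)}

    def constrain_structure(structure, content):
        # Keep only moves that endorse at least one surviving proposition.
        return {m for m in structure if any(m(p) for p in content)}

    def mutual_fixed_point(structure, content):
        while True:
            new_content = constrain_content(structure, content)
            new_structure = constrain_structure(structure, new_content)
            if new_content == content and new_structure == structure:
                return new_structure, new_content  # each locks down the other
            structure, content = new_structure, new_content

    # Hypothetical example: two moves, three claims; the claim no move
    # endorses gets squeezed out at the fixed point.
    moves = {
        lambda p: p.startswith("growth"),
        lambda p: p.startswith("altruism"),
    }
    claims = {"growth is good", "altruism is good", "cruelty is good"}
    print(mutual_fixed_point(moves, claims)[1])
    # -> {'growth is good', 'altruism is good'}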

Let's go to Friendliness. Refer to my 8-level model
of intelligence on the wiki again:

http://www.sl4.org/wiki/TheWay

The Content of Friendliness can be equated with what I
called 'Eudaimonia' (a system of positive values).
The Structure of Friendliness can be equated with what
I called 'Volition'. So:

Friendliness = Volition(Eudaimonia)

Friendliness Structure is a function (Volition) describing how
Friendliness Content, a system of values (Eudaimonia), evolves over
time. Again, let me now suggest my big trick:

Friendliness = Volition x Eudaimonia!

The Structure of Friendliness can be multiplied by the Content, and
the result is equivalent to Friendliness. I'm hoping that this
condition causes Content to constrain Structure and Structure to
constrain Content. If I'm right, the Structure 'locks down' the
Content and vice versa. Hopefully some sort of Universal Morality
has to emerge from this.
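
The same toy fixed-point reading can be specialized here, with
Volition as an operator on value systems and 'Structure locks down
Content' read as Eudaimonia being left unchanged by it. Again, the
operator below is invented purely for illustration; a real Volition
operator is exactly what has not been specified yet.

    def volition(values):
        # Hypothetical pruning rule standing in for Friendliness Structure.
        return {v for v in values if not v.startswith("anti-")}

    eudaimonia = {"happiness", "health"}
    print(volition(eudaimonia) == eudaimonia)      # True: stable Content
    print(volition({"happiness", "anti-health"})
          == {"happiness", "anti-health"})         # False: not locked down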

All this is a lot of big 'ifs' and intuitive hand-waving, of course.
