From: Ben Goertzel (ben@goertzel.org)
Date: Mon Feb 02 2004 - 10:34:47 MST
Hi,
Please pardon the long e-mail; I had a bunch of thoughts about these issues
while lying in bed this morning slowly drifting toward the wakeful state...
I'll begin in the abstract and get to concrete superhuman-AI issues by the
end...
One distinction that seems to be missing in our recent discussions of
ethics is the distinction between:
-- specific ethical rules (e.g. don't kill people, be nice to humans, don't
eat the yellow snow, etc.)
-- ethical systems (an ethical system contains a set of specific ethical
rules, and also some procedures for generating new ethical rules to deal
with new situations)
-- abstract principles specifying what's desirable in the universe
This is somewhat similar to the distinction between
-- specific scientific theories
-- scientific research programmes (Lakatos) or paradigms (Kuhn)
-- abstract principles specifying the overall goal of science as
opposed to, say, witchcraft, juggling, prostitution or music
An example of an abstract principle specifying the overall goal of science
is Popper's dictum: "Create conjectures that have greater empirical support
than their predecessors." Note that this is an informal statement, and it
may be formalized and interpreted in many different ways. But nevertheless,
essentially everyone involved in the scientific enterprise will agree with
this abstract principle, even if they adhere to different research
programmes or interpret the terms slightly differently.
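To make the three-level distinction a bit more concrete, here's a toy
Python sketch -- purely my own illustrative formalization, with made-up
names, not a serious proposal:

    # Toy formalization of the three levels; all names are illustrative.

    # Level 1: a specific ethical rule judges individual actions.
    def dont_eat_yellow_snow(action):
        return action != "eat the yellow snow"  # True means permitted

    # Level 2: an ethical system bundles specific rules together with
    # a procedure for generating new rules to deal with new situations.
    class EthicalSystem:
        def __init__(self, rules, rule_generator):
            self.rules = rules                    # list of rule functions
            self.rule_generator = rule_generator  # situation -> new rule

        def extend(self, novel_situation):
            new_rule = self.rule_generator(novel_situation)
            self.rules.append(new_rule)
            return new_rule

    # Level 3: an abstract principle scores whole states of the universe;
    # it is used to compare ethical systems, not individual actions.
    def abstract_principle(universe_state):
        raise NotImplementedError  # each principle fills this in its own way

The point is just that the three levels are different kinds of objects:
rules judge actions, systems generate rules, and principles score whole
states of affairs.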
The statement that "there is no absolute morality" has two interpretations:
1) a very obvious interpretation, which is that nothing is absolute;
everything is ultimately relative and subjective, if you choose to view it
that way ... to derive anything (including a moral rule) you must assume
something...
2) a slightly less obvious interpretation, which is that even given a
reasonable set of abstract guiding principles, there are LOADS of different
ethical systems that all plausibly attempt to achieve these guiding
principles, and it's not always so clear how to choose between them
But what are the right abstract principles for the ethical domain? --
principles analogous to Popper's "Create conjectures that have greater
empirical support than their predecessors" in science?
Jef Albright has proposed "growth" (in a very general sense -- expansion of
complexity... emergence of new patterns from old ... etc.)
Curzio has proposed "abundance of positive qualia"
Habermas proposed "free choice", in the sense that he wants individuals to
have the ability to choose which ethical systems to live by...
It seems that accepting "positive qualia" alone as an abstract principle
leads to very obvious problems (its ideal universe is a single, mindless,
endless orgasm?).
On the other hand, putting it together with growth yields something less
obviously problematic.
The overriding ethical principle:
"Create situations that involve more positive experiences for sentient
minds, and more growth in the sentient and unsentient parts of the world."
seems to me to have no obvious stupidities. (Though defining "positive
experience" and "growth" is a nontrivial problem to which there are
certainly many different approaches... the same is true of defining
"empirical support" in Popper's dictum about science.)
Let's call this the Principle of Joyous Growth ;-)
You can add freedom and diversity in if you want, obtaining something like
"Create situations that involve more positive experiences and more
experiences of free choice for sentient minds, and more growth and diversity
in the sentient and unsentient parts of the world."
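If one wanted to cast this extended principle as a crude objective
function, it might look something like the following toy sketch (the four
component measures and the equal default weights are placeholders I just
made up; defining the measures properly is, of course, the whole problem):

    # Toy objective for the extended Principle of Joyous Growth.
    # Each situation is summarized by four made-up component measures.
    def joyous_growth_score(situation, w_joy=1.0, w_freedom=1.0,
                            w_growth=1.0, w_diversity=1.0):
        return (w_joy * situation["positive_experience"]
                + w_freedom * situation["free_choice"]
                + w_growth * situation["growth"]
                + w_diversity * situation["diversity"])

    # The principle then says: among achievable situations, steer toward
    # the one with the highest score.
    def preferred(situations):
        return max(situations, key=joyous_growth_score)

Setting w_freedom and w_diversity to zero recovers the bare Principle of
Joyous Growth.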
Now, not everyone will agree that this kind of principle is good. Some
folks might prefer a principle such as
"Preserve the current way of life for humans as closely as possible."
This Principle of Human Stability is not so compatible with the Principle of
Joyous Growth... just as, for instance, the overriding goals of witchcraft
are not the same as the overriding goals of science.
There are many different ethical systems that are compatible with the
Principle of Joyous Growth. There are heuristics for choosing between them,
similar to Lakatos's heuristics for choosing between scientific research
programmes. I suggested some heuristics in a previous email, namely: good
ethical systems should
* generate new ethical rules in conjunction with progressive rather than
regressive scientific research programmes
* generate ethical rules compatible with the free-choice decisions of the
individuals who are constrained by their society to obey these rules
* generate positive experiences and growth for the individuals following the
ethical rules it generates, as well as for the universe at large
I'm sure these are not the only useful heuristics one could formulate in
this context.
These are overall criteria for assessing "ethical systems", but they are
meaningful only in the context of the Principle of Joyous Growth. In the
context of the Principle of Human Stability, for example, different criteria
for assessing ethical systems would be relevant.
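To put the same point in toy-code form: the heuristics above can be
thought of as scoring functions, and the ranking they induce over
candidate ethical systems only makes sense relative to a chosen principle
(again, everything here is illustrative):

    # Toy ranking of candidate ethical systems. The principle's scoring
    # function and the heuristics are both supplied from outside, so the
    # same candidates rank differently under different principles.
    def rank_ethical_systems(candidates, principle_score, heuristics):
        # principle_score: system -> float (e.g. expected joyous growth)
        # heuristics: list of system -> float scores, like the three above
        def total(system):
            return principle_score(system) + sum(h(system) for h in heuristics)
        return sorted(candidates, key=total, reverse=True)

Swap in a scoring function based on the Principle of Human Stability and
the ranking changes -- which is all I mean by saying the criteria are
meaningful only in the context of a principle.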
In a superhuman-AI context, perhaps the most important thing -- in my
view -- is to ensure that an AI about to launch the Singularity is firmly
grounded in the Principle of Joyous Growth. One then wants to make sure it
understands how to evaluate ethical systems within the context of the
Principle. And then, finally, one wants to give it a specific initial
ethical system to start off with.
Eliezer's "Friendly AI" ethics seems to me like a specific ethical system,
which is basically compatible with the Principle of Joyous Growth. I think
that the Friendly AI ethic is a reasonably good one (though I could quibble
with Eliezer about some specifics), but I think that for an AI to have a
stable desirable ethical position through repeated self-modification, it
needs to have solid grounding in the Principle of Joyous Growth and in the
art/science of evaluating alternative ethical systems, as well as in a
particular beneficent ethical system like Friendly AI.
The Universal Mind Simulation approach that I described in a recent
document would seem to be useful in giving an AI a means to
*assess* joyous growth -- an intuitive sense for growth and positive
experience throughout the universe.
-- Ben G