From: Ben Goertzel (ben@goertzel.org)
Date: Sun Nov 24 2002 - 12:05:33 MST

Gordon Worley wrote:
> Nope, I think you misunderstand here.
>
> Friendliness is a level up from morals: meta-morality. Friendliness
> is, among other things, a way to find the right morals/ethics,
> regardless of what they might be. You could start off by teaching a
> seed AI Friendliness-content that was very similar to the moral
> thinking of Eliezer, but if Eliezer is wrong (and it's very likely that
> he, like all humans, is to some degree wrong, and if he's completely
> correct it was an accident) it won't matter because the FAI is capable
> of figuring out how to fix its understanding of morals to be aligned
> with the morality of the Universe.
>
> Maybe we've got it all wrong and what we have been calling good is
> actually bad, like when demons in hell turn the words around and
> consider bad things good. It seems doubtful, but it is a possibility,
> even if small. If that's the case, an FAI could figure it out.

Well, Eliezer and I have argued extensively about such issues before and
have never really seen eye-to-eye on them. So it's not surprising if you
and I also disagree ;)

It may be that my view of these things is sufficiently different from
Eliezer's that I shouldn't use the term "Friendly AI" and should use a
different one.

What I'm really after is a superhumanly intelligent AI that

a) values humans & sentient life generally
b) is highly aware of ethical issues and their difficulty and importance

I.e., an "Ethically Aware, Human-Friendly AI."

I am not at all sure there's such a thing as "the morality of the universe."
However, I believe the notion of an "Ethically Aware, Human-Friendly AI" is
meaningful and worthwhile whether or not a "universal morality" exists.

When you say "Maybe ... what we have been calling good is actually bad", I
am tempted to interpret your statement as "Maybe our specific ethical codes
of behavior are actually inconsistent with our abstract ethical goals." Of
course this kind of inconsistency is possible even with the best of
intentions, because predicting the long-term outcome of a certain type of
behavior can be a difficult problem.

But I think you mean the statement in a different way -- you, like Michael
Roy Ames, seem to believe that there is some True and Universal Moral
Standard, which an FAI will find....

Well, maybe it will. I'm not confident either way....

But I am confident that if such a thing does exist and is found by a
superhumanly intelligent AI, it will transcend in many ways our human
concepts of "morals" and "ethics" -- both fulfilling and disappointing the
ideas and feelings elicited in our human minds by the phrase "Universal
Morality" ...

My work toward an Ethically Aware, Human-Friendly, superhumanly intelligent
AI is independent of the outcome of philosophical debates about the
existence or otherwise of universal morality.

-- Ben G