From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Wed Feb 28 2001 - 14:44:59 MST
Gordon Worley wrote:
> At 3:47 PM -0500 2/28/01, Eliezer S. Yudkowsky wrote:
> >Gordon Worley wrote:
> >> What I am proposing is that the end result of minds will always be
> >> the same thing. I agree that there may be gravityminds and the like
> >> along the way to the paragon mind, but there will be only one kind of
> >> paragon mind.
> >This is, beyond doubt, a possibility and a major one... but how do you
> >know that? I thought your species was too young to know that sort of thing.
> Point. I'm extrapolating. As it stands, I'm going off of intuition
> and trying to find the most aesthetically pleasing system.
Ah. Well, good enough, then. I definitely agree that the convergence of
paragon minds is the most aesthetically pleasing possibility; it's the
possibility that I personally think I would most enjoy living in; and it's
the possibility that would most simplify present-day engineering of minds
in general and Friendliness in particular.
But alas... well, you know.
I used to have a simple and beautiful theory of AI goal systems based on
objective morality. Now I have a complex theory, based on the idea that
an AI should be able to handle any possibility at least as well as it
would be handled by a self-enhancing altruistic human - but I think the
new theory is equally beautiful. Which is important, because this is one
of those instances where beauty really counts. I'm just about certain
that any theory of Friendliness which is not beautiful is wrong. Anyone
who accepts this should be able to reject any AI motivation theories that
are based on enslavement, subordination, or other ugly and exploitative
means, but unfortunately I can give no simple proof of the importance of beauty.
In Friendly AI, the opposite of beauty is called "the adversarial
attitude", though this is never stated explicitly. Maybe it should be.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT