From: Perry E. Metzger (perry@piermont.com)
Date: Fri Jan 02 2004 - 15:22:40 MST
Samantha Atkins <samantha@objectent.com> writes:
> On Fri, 02 Jan 2004 15:54:14 -0500
> "Perry E. Metzger" <perry@piermont.com> wrote:
>
>>
>> Samantha Atkins <samantha@objectent.com> writes:
>> > On Wed, 31 Dec 2003 14:21:45 -0500
>> > "Perry E. Metzger" <perry@piermont.com> wrote:
>> >> I can -- or at least, why it wouldn't be stable. There are several
>> >> problems here, including the fact that there is no absolute morality (and
>> >> thus no way to universally determine "the good"),
>> >
>> > I do not see that there is any necessity for "absolute" morality in
>> > order to achieve Friendly AI, or any necessity for a universal
>> > determination of what is "the good". Friendliness (toward
>> > humanity) does not demand this absolute universal morality, does it?
>>
>> How can one establish what is "Friendly" without it? We haven't been
>> able to produce Friendly People yet on a large scale, if you haven't
>> noticed. There is no universal notion of correct behavior yet among
>> *humans*. Who is to say that the AI won't decide to be more
>> "Friendly" towards the Islamic Fundamentalists, or towards Communists,
>> or towards some other group one doesn't like, without any way to
>> determine what "Friendly" is supposed to mean?
>
> Dah. But the point I was attempting to explore is that a definition
> of Friendliness that covered the present and immediately foreseeable
> situation (friendliness toward humans) might be sufficient to speak
> of Friendly AI. A pan-sentient definition might also be possible
> and even natural but may not be required in the first attempt. So
> it is not clear to me that "absolute" or "universal" morality, or
> even universal definitions of friendliness are required in order to
> meaningfully proceed.
I will grant that it is possible that someone will come up with a
working definition of "Friendly" that is good enough, and a way to
inculcate it into an AI they are building so deeply that it won't
slip. I similarly grant that it is possible some talented person will
come up with an algorithm that solves the traveling salesman problem in
polynomial time. I'm not holding my breath, though.
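(To see why I'm not holding my breath: the obvious exact approach to
the traveling salesman problem is to try every tour, and the number of
tours grows factorially. A toy Python sketch, purely illustrative, all
the names mine:

    # Brute-force traveling salesman: try every tour, keep the shortest.
    # With n cities there are (n-1)! tours to check, so this is fine
    # for 8 cities and hopeless for 80 -- which is the point.
    from itertools import permutations

    def shortest_tour(cities, dist):
        """cities: list of labels; dist: dict of (a, b) -> distance,
        with every ordered pair present."""
        start, rest = cities[0], cities[1:]
        best_len, best_tour = float("inf"), None
        for perm in permutations(rest):
            tour = (start,) + perm + (start,)
            length = sum(dist[a, b] for a, b in zip(tour, tour[1:]))
            if length < best_len:
                best_len, best_tour = length, tour
        return best_tour, best_len

Nobody has proven this can't be beaten in polynomial time, but nobody
has beaten it either.)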
I put it that way because this is not a new argument. The argument
over the nature of "the good" goes back thousands of years. I could
easily hand anyone who liked 50 fine books produced over the last
2500 years -- from The Republic through stuff written in the last year
or two -- exploring the question of how to make decisions about what
is and isn't "moral" or "good", and no one has made much progress
toward the goal, though they've explored lots of interesting territory.
Absent a way to determine if, say, eating a cow is immoral, there will
be no way for The Friendly AI to determine if it should be protecting
cows from being eaten -- doubtless the PETA types would argue that it
is fundamental that they should not be, and the folks at Ruth's Chris
would argue otherwise, and perhaps they would both petition The
Friendly AI for resolution, only for none to be achieved.
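(Strip the dispute to its logical skeleton and the deadlock is easy to
exhibit: two petitioners, one binary decision, contradictory hard
constraints. A toy Python sketch, with the faction names obviously
invented:

    # Each faction's petition is a hard constraint on the same binary
    # decision; the constraints contradict, so no policy satisfies both.
    petitions = {
        "no_cow_eating": lambda eating_allowed: not eating_allowed,
        "steaks_please": lambda eating_allowed: eating_allowed,
    }

    def resolve(petitions):
        """Return a decision satisfying every petition, or None."""
        for decision in (True, False):
            if all(ok(decision) for ok in petitions.values()):
                return decision
        return None  # nothing works without ranking the petitioners

    print(resolve(petitions))  # -> None

The AI cannot get from "satisfy everyone" to an answer without
smuggling in exactly the moral premise that is in dispute.)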
That might seem to be a fairly uninteresting dispute to you, but I
assure you that there are thousands of them floating around out there
waiting to bite the Friendly AI project in the gluteus maximus hard
enough to draw blood.
>> >> it is not clear that a construct like this would be able to battle it
>> >> out effectively against other constructs from societies that do not
>> >> construct Friendly AIs (or indeed that the winner in the universe
>> >> won't be the societies that produce the meanest, baddest-assed
>> >> intelligences rather than the friendliest -- see evolution on earth),
>> >> etc.
>> >
>> > An argument from evolution doesn't seem terribly germane for
>> > entities that are very much not evolved but designed and iteratively
>> > self-improved. What exactly is meant by such a loose term as
>> > "bad-ass" in this context?
>>
>> Elsewhere in the universe, there may be entities evolving now that our
>> society would be forced to war with eventually -- entities that have a
>> different notion of The Good. There might, for example, be an entity
>> out there that wants to turn the entire universe into computronium for
>> itself, and doesn't care much about taking over our resources in the
>> process. Any entities we develop into or create to protect us would
>> need to be able to fight successfully against such entities in order
>> for our descendants to survive.
>
> Well, I guess that could be seen as loosely equivalent to
> "baddest-assed". :-) But friendliness does not exclude being able to
> defend against aggression.
Maybe, maybe not. It seems the way we learned how to defend against
aggression was by learning to be highly unfriendly to anything that
got in our way. Perhaps there are other paths to such things -- I
don't know.
>> >> Anyway, I find it interesting to speculate on possible constructs like
>> >> The Friendly AI, but not safe to assume that they're going to be in
>> >> one's future. The prudent transhumanist considers survival in a
>> >> wide variety of scenarios.
>> >
>> > But what do you believe is the scenario or set of scenarios that has
>> > the maximum survivability and benefit with the least amount of
>> > pain/danger of annihilation of self and species?
>>
>> I have no idea. Prediction of a very chaotic system like the future
>> behavior of all the entities involved here is very very difficult. At
>> best I can come up with a few rules about what is likely to happen
>> based on the vaguest of constraints -- for example, making the
>> assumption that the laws of physics are what we think they are.
>
> I was not asking you to predict what you think would happen but to
> express what it is you would like to happen and believe worthwhile
> to work toward bringing into being.
I would like to see strong nanotechnology and IA technologies, because
I could apply them to my own personal survival, but beyond that, I
don't know what the spectrum of things that could happen is, or how I
might choose among them meaningfully.
I don't pretend to have the foresight to guide history in a direction
I would like -- I don't even pretend to be able to guide a small
company with any certainty, and I have at least run companies long
enough to understand the problem and to feel I can do a reasonable job
at it. The variables involved with an entire society on
the scale of the one we have are beyond my comprehension. That's why
I'm a libertarian, not a central planning freak.
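(The point about chaotic systems isn't hand-waving, by the way. Even a
one-line nonlinear recurrence defeats long-range prediction; here is
the standard logistic-map demonstration, sketched in Python:

    # Sensitive dependence on initial conditions in the logistic map
    # x -> 4x(1-x): two starting points differing by one part in a
    # billion bear no resemblance to each other within ~50 steps.
    def logistic(x, steps):
        for _ in range(steps):
            x = 4.0 * x * (1.0 - x)
        return x

    a, b = 0.400000000, 0.400000001
    for steps in (10, 30, 50):
        print(steps, logistic(a, steps), logistic(b, steps))

If that defeats forecasting, the future behavior of an entire society
is not something I'm going to pretend to steer.)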
-- Perry E. Metzger perry@piermont.com