Re: An essay I just wrote on the Singularity.

From: Perry E. Metzger (perry@piermont.com)
Date: Fri Jan 02 2004 - 13:54:14 MST


Samantha Atkins <samantha@objectent.com> writes:
> On Wed, 31 Dec 2003 14:21:45 -0500
> "Perry E. Metzger" <perry@piermont.com> wrote:
>> I can -- or at least, why it wouldn't be stable. There are several
>> problems here, including the fact that there is no absolute morality (and
>> thus no way to universally determine "the good"),
>
> I do not see that there is any necessity for "absolute" morality in
> order to achieve Friendly AI, or any necessity for a universal
> determination of what is "the good". Friendliness (toward
> humanity) does not demand this absolute universal morality, does it?

How can one establish what is "Friendly" without it? We haven't been
able to produce Friendly People yet on a large scale, if you haven't
noticed. There is no universal notion of correct behavior yet among
*humans*. Who is to say that the AI won't decide to be more
"Friendly" towards the Islamic Fundamentalists, or towards Communists,
or towards some other group one doesn't like, without any way to
determine what "Friendly" is supposed to mean?

>> that it is not
>> obvious that one could construct something far more intelligent than
>> yourself and still manage to constrain its behavior effectively, that
>
> What I have read from Eliezer on the subject disavows any notion of
> constraining the behavior of the FAI explicitly.

And I'm not very sure one could do it non-explicitly either. :)

>> it is not clear that a construct like this would be able to battle it
>> out effectively against other constructs from societies that do not
>> construct Friendly AIs (or indeed that the winner in the universe
>> won't be the societies that produce the meanest, baddest-assed
>> intelligences rather than the friendliest -- see evolution on earth),
>> etc.
>
> An argument from evolution doesn't seem terribly germane for
> entities that are very much not evolved but designed and iteratively
> self-improved. What exactly is meant by such a loose term as
> "bad-ass" in this context?

Elsewhere in the universe, there may be entities evolving now that our
society would be forced to war with eventually -- entities that have a
different notion of The Good. There might, for example, be an entity
out there that wants to turn the entire universe into computronium for
itself, and doesn't care much about taking over our resources in the
process. Any entities we develop into or create to protect us would
need to be able to fight successfully against such entities in order
for our descendants to survive.

>> Anyway, I find it interesting to speculate on possible constructs like
>> The Friendly AI, but not safe to assume that they're going to be in
>> one's future. The prudent transhumanist considers survival in a wide
>> variety of scenarios.
>
> But what do you believe is the scenario or set of scenarios that has
> the maximum survivability and benefit with the least amount of
> pain/danger of annihilation of self and species?

I have no idea. Predicting a very chaotic system like the future
behavior of all the entities involved here is very, very difficult. At
best I can come up with a few rules about what is likely to happen
based on the vaguest of constraints -- for example, making the
assumption that the laws of physics are what we think they are.

-- 
Perry E. Metzger		perry@piermont.com
