Re: An essay I just wrote on the Singularity.

From: Samantha Atkins (samantha@objectent.com)
Date: Fri Jan 02 2004 - 01:47:33 MST


On Wed, 31 Dec 2003 14:21:45 -0500
"Perry E. Metzger" <perry@piermont.com> wrote:

>
> Tommy McCabe <rocketjet314@yahoo.com> writes:
> > True, but no disproof exists.
>
> Operating on the assumption that that something which may or may not
> be possible will happen seems imprudent.
>
> > If anyone thinks they
> > have one, I would be very interested. And there's
> > currently no good reason I can see why Friendly AI
> > shouldn't be possible.
>
> I can -- or at least, why it wouldn't be stable. There are several
> problems here, including the fact that there is no absolute morality (and
> thus no way to universally determine "the good"),

I do not see any necessity for "absolute" morality in order to achieve Friendly AI, or any necessity for a universal determination of what is "the good". Friendliness (toward humanity) does not demand this absolute universal morality, does it?

> that it is not
> obvious that one could construct something far more intelligent than
> yourself and still manage to constrain its behavior effectively, that

What I have read from Eliezer on the subject disavows any notion of constraining the behavior of the FAI explicitly.

> it is not clear that a construct like this would be able to battle it
> out effectively against other constructs from societies that do not
> construct Friendly AIs (or indeed that the winner in the universe
> won't be the societies that produce the meanest, baddest-assed
> intelligences rather than the friendliest -- see evolution on earth),
> etc.
>

An argument from evolution doesn't seem terribly germane to entities that are not evolved but designed and iteratively self-improved. And what exactly is meant by such a loose term as "bad-ass" in this context?
 
> Anyway, I find it interesting to speculate on possible constructs like
> The Friendly AI, but not safe to assume that they're going to be in
> one's future. The prudent transhumanist considers survival in a wide
> variety of scenarios.
>

But what do you believe is the scenario, or set of scenarios, that offers the maximum survivability and benefit with the least pain and danger of annihilation of self and species?

- samantha
