Re: An essay I just wrote on the Singularity.

From: Samantha Atkins (samantha@objectent.com)
Date: Fri Jan 02 2004 - 14:10:53 MST


On Fri, 02 Jan 2004 15:54:14 -0500
"Perry E. Metzger" <perry@piermont.com> wrote:

>
> Samantha Atkins <samantha@objectent.com> writes:
> > On Wed, 31 Dec 2003 14:21:45 -0500
> > "Perry E. Metzger" <perry@piermont.com> wrote:
> >> I can -- or at least, why it wouldn't be stable. There are several
> >> problems here, including the fact that there is no absolute morality (and
> >> thus no way to universally determine "the good"),
> >
> > I do not see that there is any necessity for "absolute" morality in
> > order to achieve Friendly AI, or any necessity for a universal
> > determination of what is "the good". Friendliness (toward
> > humanity) does not demand this absolute universal morality, does it?
>
> How can one establish what is "Friendly" without it? We haven't been
> able to produce Friendly People yet on a large scale, if you haven't
> noticed. There is no universal notion of correct behavior yet among
> *humans*. Who is to say that the AI won't decide to be more
> "Friendly" towards the Islamic Fundamentalists, or towards Communists,
> or towards some other group one doesn't like, without any way to
> determine what "Friendly" is supposed to mean?

Dah. But the point I was attempting to explore is that a definition of Friendliness covering the present and immediately foreseeable situation (friendliness toward humans) might be sufficient to speak of Friendly AI. A pan-sentient definition might also be possible, and even natural, but may not be required in the first attempt. So it is not clear to me that "absolute" or "universal" morality, or even a universal definition of friendliness, is required in order to meaningfully proceed.

> >> it is not clear that a construct like this would be able to battle it
> >> out effectively against other constructs from societies that do not
> >> construct Friendly AIs (or indeed that the winner in the universe
> >> won't be the societies that produce the meanest, baddest-assed
> >> intelligences rather than the friendliest -- see evolution on earth),
> >> etc.
> >
> > An argument from evolution doesn't seem terribly germane for
> > entities that are very much not evolved but designed and iteratively
> > self-improved. What exactly is meant by such a loose term as
> > "bad-ass" in this context?
>
> Elsewhere in the universe, there may be entities evolving now that our
> society would be forced to war with eventually -- entities that have a
> different notion of The Good. There might, for example, be an entity
> out there that wants to turn the entire universe into computronium for
> itself, and doesn't care much about taking over our resources in the
> process. Any entities we develop into or create to protect us would
> need to be able to fight successfully against such entities in order
> for our descendants to survive.
>

Well, I guess that could be seen as loosely equivalent to "baddest-assed". :-) But friendliness does not exclude being able to defend against aggression.
 
> >> Anyway, I find it interesting to speculate on possible constructs like
> >> The Friendly AI, but not safe to assume that they're going to be in
> >> one's future. The prudent transhumanist considers survival in a wide
> >> variety of scenarios.
> >
> > But what do you believe is the scenario or set of scenarios that has
> > the maximum survivability and benefit with the least amount of
> > pain/danger of annihilation of self and species?
>
> I have no idea. Prediction of a very chaotic system like the future
> behavior of all the entities involved here is very, very difficult. At
> best I can come up with a few rules about what is likely to happen
> based on the vaguest of constraints -- for example, making the
> assumption that the laws of physics are what we think they are.

I was not asking you to predict what you think will happen, but to express what you would like to happen and believe worthwhile to work toward bringing into being.

- samantha


