Re: An essay I just wrote on the Singularity.

From: Samantha Atkins (samantha@objectent.com)
Date: Fri Jan 02 2004 - 19:41:13 MST


On Fri, 02 Jan 2004 17:22:40 -0500
"Perry E. Metzger" <perry@piermont.com> wrote:
> > Dah. But the point I was attempting to explore is that a definition
> > of Friendliness that covered the present and immediately foreseeable
> > situation (friendliness toward humans) might be sufficient to speak
> > of Friendly AI. A pan-sentient definition might also be possible
> > and even natural but may not be required in the first attempt. So
> > it is not clear to me that "absolute" or "universal" morality, or
> > even universal definitions of friendliness are required in order to
> > meaningfully proceed.
>
> I will grant that it is possible that someone will come up with a
> working definition of "Friendly" that is good enough, and a way to
> inculcate it into an AI they are building so deeply that it won't
> slip. I similarly grant that it is possible some talented person will
> come up with an algorithm that solves the traveling salesman in
> polynomial time. I'm not holding my breath, though.
>

Since a Friendly AI may be necessary to our survival, it presumably deserves a higher priority than the traveling salesman problem. While holding one's breath is not advisable, putting energy into the problem is.

 
> I put it that way because this is not a new argument. The argument
> over the nature of "the good" goes back thousands of years. I could
> easily hand anyone who liked 50 fine books produced over the last
> 2500 years -- from The Republic through stuff written in the last year
> or two -- exploring the question of how to make decisions about what
> is and isn't "moral" or "good", and no one has made much progress toward
> the goal, though they've explored lots of interesting territory.
>

Yes indeed. Many of the arguments fall apart, and some seem promising. Perhaps they are all lacking. Perhaps we humans aren't even smart enough to pose the question cleanly or to fully justify a "good-enough" answer even if we stumbled upon one. But this does not mean the question is fundamentally and forever unanswerable.
 
> Absent a way to determine if, say, eating a cow is immoral, there will
> be no way for The Friendly AI to determine if it should be protecting
> cows from being eaten -- doubtless the PETA types would argue that it
> is fundamental that they should not be, and the folks at Ruth's Chris
> would argue otherwise, and perhaps they would both petition The
> Friendly AI for resolution, only for none to be achieved.
>

I would guess that the AI would point out that eating cows is not at all necessary post-Singularity and would forbid the behavior toward a possibly upliftable sentient. I'm not sure we would even have cows. But I take your point.
 
> > I was not asking you to predict what you think would happen but to
> > express what it is you would like to happen and believe worthwhile
> > to work toward bringing into being.
>
> I would like to see strong nanotechnology and IA technologies, because
> I could apply them to my own personal survival, but beyond that, I
> don't know what the spectrum of things that could happen are, or how I
> might choose among them meaningfully.
>

So, do you have anything you care about beyond your own personal survival? Any preferences for the type of world you live in or the kind of company that may or may not be around, for instance?

 
> I don't pretend I have the foresight to be able to guide history into
> a direction I would like -- I don't even pretend to be able to guide a
> small company with any certainty and I have at least operated those
> enough to have understanding of the problem and feel like I can do a
> reasonable job at it. The variables involved with an entire society on
> the scale of the one we have are beyond my comprehension. That's why
> I'm a libertarian, not a central planning freak.
>

You present a false dichotomy: between needing near-godlike foresight to make much difference and having no ability to make any at all; between being a libertarian and being a "central planning freak". I think it is within each person's range of responsibility to consider, to the extent of their abilities, what kind of world they wish to inhabit and to do what they can to achieve it. It takes no pretense or super-ability to do what one can, guided by one's best knowledge and values.

If we don't work, at least in part, at the level of envisioning what we want, then how in the hell do we expect to have any chance of getting there?

- s
>
> --
> Perry E. Metzger perry@piermont.com
