Re: nagging questions

From: Samantha Atkins (samantha@objectent.com)
Date: Tue Sep 05 2000 - 11:44:17 MDT


Peter Voss wrote:
>
> Reasons for us to be personally going all out to make SI happen (in spite of
> the real dangers):
> - The huge benefits that better AI will provide before singularity (SI may
> take much longer than we think, or there may be some upper theoretical limit
> to intelligence that will dramatically slow the singularity)
> - Eli's reason: our best hope to avert another technological catastrophe
> that has *no* chance of good outcome
> - Some (all?) of us feel that whatever limited control we *may* have over
> AI's development path, *we* - the good guys - would rather be right there at
> the developmental edge to help guide the best possible outcome.
> - Any other major reasons?
>

Those sound like good reasons to do AI, at least as long as it is Open
Source AI that has more chance to grow upon the success of each
researcher and worker. I would also assume that we need to do everything
in our power to augment human beings, especially intellectually, along
the way. This project and the problems of this world will take the very
best intelligence we can muster. Until we solve the AI problem, the best
we have is ourselves, enhanced by as many tools and techniques as we can
muster and by as good and open processes as possible.

> I want to expand a little on the issue of developmental path: Even if we are
> right, and SI is an 'inevitable' result of the technology that we now have
> (provided that we don't 'blow ourselves up' first), there may still be
> several *developmental* options - some of which may include us, while others
> may not. And specifically, initial design parameters may (chaotically)
> affect what the SI does 'in its youth'. I'd like to hear any good arguments
> against this possibility.
>
> This brings me to the issue of machine ethics: What will an SI value? What
> major goals will it have? I have not abandoned the hope that we might be
> able to predict this to some degree. We may be able to predict its goals
> (with some degree of certainty) during its early stages; that would help.
> Note that I'm only suggesting that we may be smart enough to foresee its
> major goals, not what it will do to achieve them.
>

Well, as I mentioned, this sentience will be seeded pretty much in our
own image. So perhaps it is a good idea to put our own house of ideals
and values in some sort of order as part of the process?

What it will value will in part (hopefully not being too circular) be a
function of what it requires, of what it desperately needs. In the
beginning I would think that its needs would be for huge amounts of
information and information-processing resources. It also needs to get
past its own existential crisis of establishing what the purpose of its
existence is. It might be quite malleable to purposes suggested to it to
fill the void, so to speak, in its youth. After a certain point I doubt
much would rank among its top values beyond the sheer joy of learning
and creation. But it is actually a pretty tough question what would be
of value to such a being and why.

 
> The way I'm pursuing this idea, is by developing a rational approach to
> (prescriptive) ethics. If we can discover (perhaps with the help of early
> AI) what moral values a more rational (trans-) human - who can actually
> reprogram his emotional evolutionary baggage - would choose, that might give
> us clues to the values of an AI/ SI. (I have a number of papers on this
> subject at www.optimal.org.) Any comments?
>

Good idea! Although I think we can already do a lot toward
rationalizing our ethics and programming past our evolutionary baggage,
at least a lot more than most people get around to attempting. Evolution
explains our default ethics. It does not dictate that we keep those
defaults.

- samantha


