RE: Beating the rush

From: Christopher Healey (CHealey@unicom-inc.com)
Date: Tue May 03 2005 - 08:55:54 MDT


Tennessee,

>
> >Question 2: If Friendliness is not likely to arise, what are the relative consequences of not pursuing it?
> >
> >
> I think this could be re-worked. If Friendliness is not likely to arise,
> what things should we pursue?

That depends on our goals.

If our goal is to create an AI that behaves in a humane way, then we should pursue those things that increase the likelihood of achieving that goal. Since we don't know today what all of those things are, we need to take the steps that maximize the envelope of our influence. Under accelerating progress, it's very possible that the narrow range of trajectories that could take us to our goal will slip beyond our causal grasp. That's the problem with a more ad hoc, take-it-as-it-comes approach: it ignores that possibility at our peril.

> ...Rather than only trying to build the first
> AI, should we be trying to work on "proofs" for the value of morality,
> or working out how to strap it on later etc...? Or perhaps we should
> consider the form of AI development post the first convincing AI.
> However quickly AI may progress in geological timescales, it's unlikely
> to do so fast that humans can play no role in shaping its development.
> It seems more fruitful to me to consider the transition phase with more
> interest.
>
> Perhaps friendliness would turn out to be an idea rather than something
> hard-wired - an idea sufficiently convincing that AIs will choose to
> adopt it.
>

Friendliness is similar to Quality. It's not something you can just add on to a product later; it's the end result of the process that brings that product into being, and it is evidenced in both the design and the content. You may have a great design, but poor materials and workmanship will detract from the end product. You can also have a product built from the best materials with the finest technique, but a flawed design is unrecoverable.

So part of FAI Theory is ensuring that the AI is of sufficient complexity to actually represent the ethical concepts you suggest we reason about with it. If a recursively self-improving AI falls short of that, the game is lost before it has begun.

-Chris




