RE: Military Friendly AI

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Jun 28 2002 - 18:32:22 MDT


> This sounds like a realistic view of the problem. The system does have
> some basic friendliness implemented. But it isn't clear how best to
> implement the whole Friendliness system and, thus, doing so would be a
> waste of time at this point (since at best it is very likely to be useless
> and, at worst, could bog down or throw off the entire system).
>
> The Friendliness goals are already in place, then?

Webmind had a simple Friendliness goal system in place, though it was never
tested. (It wasn't explicitly called a Friendliness goal system; it was just
called a goal system, one component of which was, to put it very crudely, a
desire to please humans.)

Novamente does not yet have a goal system at all; one will be implemented,
at my best guess, around the very end of 2002 or the start of 2003.
Currently we are just testing various cognitive and perceptual mechanisms,
not yet experimenting with autonomous goal-directed behavior.

> >So, I think we don't even know how to build a good failsafe mechanism for
> >Novamente or any other AI yet. We will only know that when we know how to
> >measure the intelligence of an AGI effectively, and we will only know *this*
> >based on experimentation with AGIs smarter than the ones we have now.
>
> Well, getting a basic failsafe system in sooner should also help you learn
> how to do this better in the long run.

A failsafe mechanism has two parts:

1) a basic mechanism for halting the system and alerting appropriate people
when a "rapid rate of intelligence increase" is noted

2) a mechanism for detecting a rapid rate of intelligence increase

Part 1 is easy; part 2 is hard. There are obvious things one can do, but
since we've never dealt with this kind of event before, it's entirely
possible that a "deceptive intelligence increase" could sneak up on us.
Measuring general intelligence is tricky.
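
Just to make the two halves concrete, here is a purely illustrative Python
sketch. Nothing in it is real Novamente code: measure_intelligence(),
system.pause() and the threshold numbers are all made-up placeholders. Part 1
is the trivial wrapper; part 2 is the function nobody knows how to write yet.

import time

# Purely illustrative numbers -- not proposals for real settings.
GROWTH_THRESHOLD = 1.5      # halt if the estimate grows >50% between checks
CHECK_INTERVAL_SECS = 3600  # how often to re-measure

def measure_intelligence(system):
    """Part 2, the hard part: return a scalar estimate of the system's
    general intelligence.  Nobody knows how to write this yet in a way
    that a deceptive intelligence increase couldn't fool."""
    raise NotImplementedError("the unsolved problem")

def halt_and_alert(system, old_score, new_score):
    """Part 1, the easy part: pause the system and tell the humans."""
    system.pause()  # hypothetical hook on the AI system object
    print("FAILSAFE: intelligence estimate jumped "
          "%.2f -> %.2f, system halted" % (old_score, new_score))

def failsafe_loop(system):
    previous = measure_intelligence(system)
    while True:
        time.sleep(CHECK_INTERVAL_SECS)
        current = measure_intelligence(system)
        if current > previous * GROWTH_THRESHOLD:
            halt_and_alert(system, previous, current)
            return
        previous = current

Of course, anything smart enough to game measure_intelligence() makes the
rest of the loop useless -- which is exactly the deception worry above.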

> If anything even close to this looks likely you better be getting opinions
> of hundreds or thousands of relevant experts. Or I'll come kick yer ass. ;)
> Seriously.

Seriously -- this would be a tough situation.

What if one of these thousands of relevant experts decides the system is so
dangerous that they have to destroy it -- and me? What if they have allies
with the means to do so?

Emotions may run very high regarding such a situation....

> What happens if you get Novamente working as an AI, it is proven that
> Friendliness cannot be guaranteed, and it looks like your design is
> somewhat more risky than the ideal system? Let's say your AI has a 4%
> chance (totally arbitrary, just for illustration) of turning out
> unfriendly if allowed to proceed. And a group of responsible experts (not
> crackpots, not government appointed, not self-interested parties, etc.)
> strongly believe a different design could lower the risk to 3%. Let's say
> you'd have to scrap 70% of your code base and logic to implement the other
> design and it would take you several years to do this.
>
> What is the trade-off point between risk and time?

My own judgment would be, in your scenario, to spend 3 more years
engineering to lower the risk to 3%.

However, I would probably judge NOT to spend 3 more years engineering to
lower the risk from 4% to 3.9%.

These are really just intuitive judgments though -- to make them rigorous
would require estimating too many hard-to-estimate factors.

I don't think we're ever going to be able to estimate such things with that
degree of precision. I think the decisions will be more like a 1% risk
versus a 5% risk versus a 15% risk, say. And this sort of decision will be
easier to make...

> What if another team was further ahead on this other design than yours?

It depends on the situation. Of course, egoistic considerations of priority
are not a concern. But there's no point in delaying the Novamente-induced
Singularity by 3 years to reduce risk from 4% to 3%, if in the interim some
other AI team is going to induce a Singularity with a 33.456% risk...
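
To put some arithmetic behind that, here is a toy back-of-the-envelope
calculation in Python. All the numbers are made up, including my assumed
20%-per-year chance that the rival team finishes during our 3-year delay:

# Assumed: 20%-per-year chance the rival finishes while we are delaying.
p_rival_first = 1 - (1 - 0.2) ** 3          # chance they beat us over 3 years
risk_if_we_wait = p_rival_first * 0.33456 + (1 - p_rival_first) * 0.03
risk_if_we_launch = 0.04

print(round(p_rival_first, 3))    # 0.488
print(round(risk_if_we_wait, 3))  # 0.179
print(risk_if_we_launch)          # 0.04

On those toy numbers, waiting more than quadruples the expected risk --
which is why the decision can't be made by looking at our own 4%-versus-3%
in isolation.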

> Another good reason why morality should not be decided by a single
> individual. Eliezer's or Ben's morality may not allow death, thus severely
> going against Ben's wife's morals. Ben's wife's morals, however, would not
> prevent any deaths, and thus would go strongly against Eliezer's and Ben's
> (and mine). So maybe preventing deaths except where the individual does
> not want this protection is the best answer. But it takes more than one
> viewpoint to even see these questions.

In fact, neither Eliezer nor I wish to *force* immortality on anyone, via
uploading or medication or anything else.

We do have a personal difference, in that Eliezer seems emotionally
disturbed by the fact that some people *want* to die at the end of their
natural lifespan, whereas it really doesn't bother me much. Death is a
terrible thing, yet it adds a certain poignancy and spice to life, and in my
view the removal of death "by natural causes" is not an unalloyed plus --
though for me personally, the plusses outweigh the minuses, and for sure, I
plan on living as long as humanly or superhumanly possible, myself!!

Interestingly, in many conversations over the years I have found that more
women want to die after their natural lifespan has ended, whereas more men
are psyched about eternal life. I'm not sure if this anecdotal impression
would hold up statistically, but if so, it's interesting. Adding some meat
to the idea that women are more connected to Nature, I guess... ;)

-- ben


