RE: Knowability of Friendly AI (was: ethics of argument)

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Nov 11 2002 - 09:51:05 MST


Eliezer wrote:
> > Although I've spent much of my life creating heuristic
> > conceptual arguments
> > about topics of interest, the three forms of knowledge I trust most are:
> >
> > -- mathematical
> > -- empirical
> > -- experiential [when referring to subjective domains]
>
> It looks to me like you're missing out on the entire domain of
> nonmathematical abstract reasoning, e.g. by combining generalizations
> from previous empirical experience, or by nonmathematical reasoning
> about the properties of abstract systems that are simple enough to be
> modeled deductively.

I am not missing out on this domain; I just trust it less than the three I
mentioned.

Actually, I spend most of my time doing nonmathematical abstract reasoning.

> You are an individual, you can always go off and build AI regardless of
> what the heuristic arguments say, but right now the heuristic arguments
> say that Novamente as it stands, if it works, will destroy the world,

Right now YOUR heuristic arguments say that...

... and MINE say otherwise ;-)

> Saying "I distrust all heuristic arguments" doesn't really cut it here.

Of course I don't distrust all heuristic arguments equally, and I didn't
mean to imply that I did.

> One, you don't distrust your *own* heuristic arguments,

I trust my own heuristic arguments less than my own mathematical, empirical
or experiential evidence.

I trust my own newborn heuristic arguments only slightly; but if a
heuristic argument of mine has been around a while and I haven't been
convinced of a flaw in it, then yeah, I trust it somewhat. My research is
guided by my heuristic arguments, after all.

> It seems to me
> that your claim that nothing can be known about Friendly AI in advance
> would be, if it were true, a strong (though not knockdown) argument
> against developing AI in the first place.

A better statement of my perspective would be: "The amount that can be
confidently known about Friendly AI in advance of having real AGIs to
experiment with, or of a huge mathematical advance, is very small."

> Would it be fair to summarize your argument so far as: "Novamente is a
> good Singularity project because nothing useful can be known about
> Friendly AI in advance, which unknowability is itself knowable on the
> grounds that Friendly AI is neither empirically demonstrated nor
> mathematically proven knowledge. It is correct Singularity strategy to
> invest in AI projects when nothing is known about Friendly AI, since the
> only way to find out is to try it. The amount of concern I've shown for
> Friendly AI so far is around the right proportional amount of concern
> desired in the leader of a Singularity AI project."

No, that's a somewhat twisted summary of my argument ;)

I agree with the last sentence, but I do not agree with the previous
sentences as stated. They all seem like conscious or unconscious attempts
to "spin" my perspective in an unfavorable way.

For instance, when you say

> It is correct Singularity strategy to
> invest in AI projects when nothing is known about Friendly AI, since the
> only way to find out is to try it.

I'd say, rather:

"It is correct Singularity strategy to invest in AGI project when very
little is known about Friendly AI, since serious knowledge about Friendly AI
is only going to evolve organically along with practical knowledge about
building and teaching AGI's."

> Are you even going
> to *have*
> a Friendliness Failure Lab?

We will have a Novababy testing lab, and Friendliness will be one among many
things tested there.

As I stated on this list a couple of months ago, I will write & post a plan
for Friendliness testing and other aspects of Novababy testing after
completing the current draft of the Novamente book. That looks like a few
more months away.

> I
> think - as you seem to deny - that it's possible to take a *lot* of
> territory on the Friendly AI part of Singularity strategy, over and above
> that represented by a generic recursively self-improving AI project.

Yes, this is a significant area of disagreement between us.

Also, I know I'm not alone in my views on this within the small group of
Singularity-focused AGI researchers. Peter Voss, for instance, has expressed
very similar views on this topic on this list in the past. He has said that
he feels it's just "too early" to be thinking in such detail about Friendly
AI, but he is spending his time trying to actually build an AGI capable of
launching the Singularity.

-- Ben G


