From: Ben Goertzel (ben@goertzel.org)
Date: Thu Sep 08 2005 - 20:30:13 MDT
> > My conjecture is that any useful mechanism of hypothesis search inserted
> > specifically into the inference mechanism involved in attention allocation /
> > inference control, is going to introduce complex dynamics that render the
> > system extremely difficult to predict.
>
> That depends on *what* you're trying to predict. Let's say you're
> searching the inference space of axiomatic derivations in ZF set theory.
> No matter what kind of bizarre probabilistic searches you use to
> control your inferences, if the space is of legitimate derivations, you
> can remain confident that the result is a legitimate proof. This is the
> prediction that matters, even if you can't predict how exactly the proof
> will work.
True enough.
And one obvious question is then whether the Friendliness or otherwise of
the system is tractable to predict...
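To make the provability point concrete for readers following along, here is a
minimal sketch in Python. It is a toy Horn-clause forward chainer, not PLN and
not an actual ZF prover; the names are mine, purely for illustration. The step
generator only ever proposes sound steps, so the control heuristic (here just
random.choice, but it could be anything) affects which derivation you find and
how long it takes, never whether the result checks out:

    import random

    # A toy forward chainer over propositional Horn clauses.
    # Each rule is (premises, conclusion). Soundness is enforced by the
    # step generator itself; the control heuristic only picks WHICH sound
    # step to take next, so it cannot produce an illegitimate derivation.

    def forward_chain(axioms, rules, goal, choose, max_steps=10000):
        steps = [("axiom", (), a) for a in axioms]   # (kind, premises, conclusion)
        known = set(axioms)
        for _ in range(max_steps):
            if goal in known:
                return steps
            candidates = [(ps, c) for ps, c in rules
                          if c not in known and all(p in known for p in ps)]
            if not candidates:
                return None                          # goal not reachable
            ps, c = choose(candidates)               # arbitrary control policy
            known.add(c)
            steps.append(("rule", ps, c))
        return None

    def check_derivation(steps, axioms, rules):
        # Independent verifier: every line is an axiom or follows, by a
        # listed rule, from lines that appear earlier in the derivation.
        rule_set = {(tuple(p), q) for p, q in rules}
        seen = set()
        for kind, ps, c in steps:
            if kind == "axiom":
                ok = c in axioms
            else:
                ok = (tuple(ps), c) in rule_set and all(x in seen for x in ps)
            if not ok:
                return False
            seen.add(c)
        return True

    axioms = {"A"}
    rules = [(("A",), "B"), (("B",), "C"), (("A", "C"), "D")]
    proof = forward_chain(axioms, rules, "D", choose=random.choice)
    print(check_derivation(proof, axioms, rules))    # True, however the choices went

The analogy to your ZF example: the invariant the verifier checks is guaranteed
no matter how the inference-control layer behaves; the heuristic only touches
efficiency, not soundness.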
Your point about the predictability of provability is correct, but
provability is a weak analogue to Friendliness: in an AI system that includes
crisp logic as a component, provability is wired into the system's machinery,
whereas it's not at all clear how to wire something as nebulous as
Friendliness into the axioms of an AI system.
In this context, one point (which you have clearly already figured out, but
others on the list may not have) is:
* If Friendliness is an emergent property of the AI system (which is the
case with human friendliness, almost surely), then it is going to be
susceptible to the type of unpredictability I've mentioned above (and other
types too, I would suppose)
* If Friendliness is somehow wired into the logic of the AI system at the
most fundamental level, so that each step of the system contains a
Friendliness-check, then it may be possible to make Friendliness predictable
in roughly the sense that provability is predictable in your analogy
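To illustrate that second bullet schematically (this is a purely architectural
sketch; the check function below is a hypothetical stand-in, and writing the
real thing is of course the unsolved problem): a check consulted on every
single step gives you an invariant that holds by construction, much as
soundness did above, while anything the check does not mention is left to
emerge and can only be observed, not guaranteed.

    def run(state, steps, propose, check, score, transition):
        # Every candidate action must pass `check` before it can happen;
        # that is the "wired-in" part. Long-run properties that `check`
        # does not mention are the "emergent" part.
        for _ in range(steps):
            allowed = [a for a in propose(state) if check(state, a)]
            if not allowed:
                break        # refuse to act rather than violate the check
            state = transition(state, max(allowed, key=lambda a: score(state, a)))
        return state

    # Toy instantiation: the wired-in check (a stand-in for a real
    # Friendliness-check, which nobody knows how to write) forbids the
    # state from ever exceeding 100, whatever the greedy heuristic prefers.
    final = run(
        state=0,
        steps=1000,
        propose=lambda s: [1, 5, 25],
        check=lambda s, a: s + a <= 100,
        score=lambda s, a: a,            # greedy: biggest jump
        transition=lambda s, a: s + a,
    )
    print(final)                         # 100: the bound holds by construction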
Another point, which you may or may not agree with, is:
* If Friendliness is to be achieved within feasible computational resources,
it will need to be emergent to at least a large extent (though not
necessarily completely)
So, my intuition is that
-- to be feasible, Friendliness must be emergent
-- to be predictable, Friendliness must be wired into the system logic
The problem of Friendly AI engineering then becomes one of creating a system
with Friendliness wired into the logic, but with a design tailored to cause
Friendliness to emerge. The coordination between the wired-in Friendliness
and the emergent Friendliness has to be such that one can prove probabilistic
theorems about the emergent Friendliness. Is this possible?
I really don't know...
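To give a feel for the flavor of "probabilistic theorem" I have in mind (a
deliberately toy example, nothing like a real Friendliness guarantee): if one
could show that each individual step of the system breaks some
Friendliness-relevant invariant with probability at most eps, then by a simple
union bound the probability of the invariant failing anywhere along a T-step
trajectory is at most T*eps, with no independence assumptions needed. The hard
part, obviously, is establishing any such per-step bound for a property that
emerges from the system's self-organizing dynamics rather than being checked
explicitly at each step.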
There are deeper things to say in this vein, but then the conversation would
get a lot more technical and the emails would take a lot longer to write.
-- Ben G