From: Richard Loosemore (rpwl@lightlink.com)
Date: Sat Aug 26 2006 - 20:38:33 MDT
[begin part 8]
************************************************************
*                                                          *
*           The Complex Systems Critique (Again)           *
*                                                          *
************************************************************
Another way to say what I have been trying to say:
The question is: how does the *design* of a cognitive system's learning
mechanisms interact with the *design* of its "thinking and reasoning and
knowledge representation" mechanism?
Can you, for example, sort out the thinking/reasoning/knowledge
representation mechanism first, then go back and find some good learning
mechanisms that will fill that mechanism with the right sort of data,
using only real world interaction and (virtually) no hand-holding from
the experimenter?
Or is it the case that you can pick a thinking/reasoning/knowledge
representation mechanism of your choice, and then discover to your
horror that there is not ever going to be a learning mechanism that
feeds that mechanism properly? Could it be that the two are so
interrelated that a wrong choice of one precludes ANY choice of the other?
Now, complex adaptive systems theory would seem to indicate that if the
learning mechanisms are powerful enough to make the system Class IV
(i.e. complex-adaptive), the global behavior of those learning
mechanisms is going to be disconnected from the local behavior .... you
can't pick a global behavior first and then pick a local mechanism that
generates that behavior. That is the disconnect.
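To make that concrete, here is a minimal sketch (mine, added for
illustration, not part of the original post). "Class IV" is usually read as
Wolfram's class of complex cellular automata, and Rule 110 is the textbook
example: the local update rule is a tiny lookup table you can read in full,
yet the only general way to find out what global pattern it produces is to
run it.

# Rule 110: maps the (left, self, right) neighborhood to the next state.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply the local rule to every cell (wrapping at the edges)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single live cell and watch global structure emerge --
# structure you could not have read off the local rule table.
row = [0] * 79 + [1]
for _ in range(30):
    print("".join(".#"[c] for c in row))
    row = step(row)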
If this were the case with cognitive systems, we would get the situation
we have now. And one way out would be to build the kind of development
environment and adopt the kind of research strategy I have talked about.
Richard Loosemore.
************************************************************
*                                                          *
*      Goals Systems and Semantics of Goal Statements      *
*                                                          *
************************************************************
Michael Vassar wrote:
> Some posters seem to be very seriously unaware of what
> was said in CAFAI, but having read and understood it
> should be a prerequisite to posting here.
> My complaints
> Friendly AIs are explicitly NOT prevented from messing
> with their source-code or with their goal systems.
> However, they act according to decision theory. ....
                                 ^^^^^^^^^^^^^^^
I have to go on record here as saying that I (and others who are poorly
represented on this list) fundamentally disagree with this statement. I
would not want readers of these posts to get the idea that this is THE
universally agreed way to build an artificial intelligence. Moreover,
many of the recent debates on this list are utterly dependent on the
assumption that you state above, so to people like me these debates are
just wheel-spinning built on nonsensical premises.
Here is why.
Friendly AIs built on decision theory have goal systems that specify
their goals: but in what form are the goals represented, and how are
they interpreted? Here is a nice example of a goal:
"Put the blue block on top of the red block"
In a Blocks World, the semantics of this goal - its "meaning" - are not
at all difficult. All well and good: standard 1970s-issue artificial
intelligence, etc.
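To see why that case is easy, here is a minimal sketch (mine, for
illustration; the names "world", "goal_satisfied" and "achieve" are
hypothetical). In a Blocks World the goal bottoms out in a predicate that
can be checked directly against a small, fully observable state, so there
is nothing left over for an "interpretation mechanism" to decide.

# Each block maps to the thing it is resting on.
world = {"blue": "table", "red": "table"}

def goal_satisfied(state):
    """True exactly when the blue block is on top of the red block."""
    return state.get("blue") == "red"

def achieve(state):
    """A trivial one-step 'planner': move the blue block onto the red one."""
    new_state = dict(state)
    new_state["blue"] = "red"
    return new_state

print(goal_satisfied(world))           # False
print(goal_satisfied(achieve(world)))  # True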
But what happens when the goals become more abstract:
"Maximize the utility function, where the utility function specifies
that thinking is good"
I've deliberately chosen a silly UF (thinking is good) because people on
this list frequently talk as if a goal like that has a meaning that is
just as transparent as the meaning of "put the blue block on top of the
red block". The semantics of "thinking is good" is clearly not trivial,
and in fact it is by no means obvious that the phrase can be given a
clear enough semantics to enable it to be used as a sensible input to a
decision-theory-driven AGI.
The behavior of an AGI with such a goal would depend crucially on what
mechanisms it used to interpret the meaning of "thinking is good". So
much so, in fact, that it becomes stupid to talk of the system as being
governed by the decision theory component: it is not; it is governed by
whatever mechanisms you can cobble together to interpret that vague goal
statement. What initially looked like the dog's tail (the mechanisms
that govern the interpretation of goals) starts to wag the dog (the
decision-theory-based goal engine).
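By way of illustration, here is a minimal sketch (the names and toy numbers
are hypothetical, not anyone's proposed design). The "decision theory
component" is just an argmax over expected utility; every interesting
question about the system's behavior gets pushed into the stub that scores
how much an outcome counts as "thinking".

import random

def expected_utility(action, utility, outcome_model, samples=1000):
    """Average the utility over sampled outcomes of taking `action`."""
    return sum(utility(outcome_model(action)) for _ in range(samples)) / samples

def choose(actions, utility, outcome_model):
    """The 'decision theory component': pick the action with the highest EU."""
    return max(actions, key=lambda a: expected_utility(a, utility, outcome_model))

# This stub is where the real problem lives: what, concretely, does
# "thinking is good" score?  CPU cycles?  Inferences drawn?  Whatever
# answer gets cobbled together here is what actually governs the system.
def thinking_is_good(outcome):
    return outcome.get("amount_of_thinking", 0.0)

def toy_outcome_model(action):
    return {"amount_of_thinking": {"ponder": 5.0, "act": 1.0}[action] + random.random()}

print(choose(["ponder", "act"], thinking_is_good, toy_outcome_model))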
The standard response to this criticism is that while the semantics are
not obvious, the whole point of modern AI research is to build systems
that do rigorously interpret the semantics in some kind of compositional
way, even in the case of abstract goals like "thinking is good". In
other words, the claim is that I am seeing a fundamental problem where
others only see a bunch of complex implementation details.
This is infuriating nonsense: there are many people out there who
utterly disagree with this position, and who have solid reasons for
doing so. I am one of them.
So when you say "Friendly AIs [...] act according to decision theory," you
mean "The particular interpretation of how to build a Friendly AI that is
common on this list acts according to decision theory."
And, as I say, much of the recent discussion about passive AI and goal
systems is just content-free speculation, from my point of view.
Richard Loosemore
[end part 8]