From: Ben Goertzel (firstname.lastname@example.org)
Date: Fri Dec 13 2002 - 04:36:39 MST
Eliezer wrote (responding to Bill Hibbard):
> > A robust implementation of reinforcement learning must solve
> > the temporal credit assignment problem, which requires a
> > simulation model of the world. This simulation model is the
> > basis of reasoning based on goals. Planning and goal-based
> > reasoning are emergent behaviors of a robust implementation
> > of reinforcement learning.
> Perhaps the complex behavior of planning is emergent in the simple
> behavior of reinforcement, as well as the simple behavior of reinforcement
> being a special case of the complex behavior of planning. I don't think
> so, but then I haven't tried to figure out how to do it, so I wouldn't
> know whether it's possible.
Here is my view on the relationship between planning & reinforcement
learning...
Firstly, I think that reinforcement learning, if given enough time & space
to work in, would eventually emergently give rise to planning. (This could
probably be shown mathematically, actually, though estimates on the time &
space required would be hideously exponential. This relates to Hutter's and
Schmidhuber's work on AGI based on algorithmic information theory.)
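To make the "hideously exponential" point concrete, here is a toy sketch (my own construction, not anything from this thread): if reinforcement arrives only when an entire action sequence is exactly right, pure trial-and-error search over sequences works, but the number of trials grows as |actions|^horizon.

```python
# Toy illustration: reward-only-at-the-end reinforcement can discover a
# multi-step "plan" by brute force, but the search space is exponential
# in the planning horizon.  TARGET and ACTIONS are arbitrary choices.
import itertools

ACTIONS = ["left", "right"]
TARGET = ("right", "right", "left", "right", "right")  # the only rewarded sequence

def reward(seq):
    """Sparse reinforcement signal: 1 for the exact target sequence, else 0."""
    return 1.0 if seq == TARGET else 0.0

def brute_force_rl(horizon):
    """Enumerate action sequences until one is reinforced."""
    trials = 0
    for seq in itertools.product(ACTIONS, repeat=horizon):
        trials += 1
        if reward(seq) > 0:
            return seq, trials
    return None, trials

best, trials = brute_force_rl(len(TARGET))
print(best, trials)  # worst case is len(ACTIONS) ** horizon = 2**5 = 32 trials
```

With 2 actions and horizon 5 this is trivial; at horizon 50 the same scheme needs up to 2^50 trials, which is the sense in which unstructured reinforcement learning is complete but useless without further machinery.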
Secondly, I think that in order to make it work in a reasonable
resource-usage framework, reinforcement learning has got to be modified so
that it basically emulates some sort of probabilistic planning framework.
I.e., the solution to the temporal credit assignment problem requires not
just a simulation of the world but inference & planning based on this
simulation.
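The kind of hybrid I have in mind here is in the spirit of Sutton's Dyna-Q: ordinary reinforcement learning, augmented with a learned simulation model of the world that is replayed internally for planning-style updates. The sketch below is my own simplification on a toy chain world; the constants and the environment are arbitrary.

```python
# Minimal Dyna-Q-style sketch: Q-learning plus a learned world model
# whose remembered transitions are replayed as "imagined" planning steps.
import random
from collections import defaultdict

random.seed(0)  # for reproducibility of this toy run

ALPHA, GAMMA, EPS, PLAN_STEPS = 0.1, 0.95, 0.1, 20
N_STATES, ACTIONS = 6, [0, 1]   # toy chain: action 1 moves right, 0 moves left

Q = defaultdict(float)          # Q[(state, action)]
model = {}                      # model[(state, action)] = (reward, next_state)

def step(s, a):
    """Deterministic chain MDP: reward only on reaching the right end."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return (1.0 if s2 == N_STATES - 1 else 0.0), s2

def greedy(s):
    # random tie-breaking so the untrained agent explores instead of stalling
    return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

for episode in range(50):
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        r, s2 = step(s, a)
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        model[(s, a)] = (r, s2)               # learn the simulation model
        for _ in range(PLAN_STEPS):           # planning: replay imagined experience
            ps, pa = random.choice(list(model))
            pr, ps2 = model[(ps, pa)]
            Q[(ps, pa)] += ALPHA * (pr + GAMMA * max(Q[(ps2, b)] for b in ACTIONS)
                                    - Q[(ps, pa)])
        s = s2

print([greedy(s) for s in range(N_STATES - 1)])  # should prefer action 1 everywhere
```

The point of the example: the reinforcement-learning update and the planning update are literally the same equation, applied once to real experience and many times to simulated experience, which is one concrete sense in which the two categories blur together.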
Finally, I think that any planning process that is going to be effective is
going to have to be responsive to feedback in roughly the manner of
reinforcement learning.
Ultimately, just like "symbolic" and "subsymbolic", "reinforcement learning"
and "planning" are crudely drawn categories of an early stage of
mind-theory, which are not nearly as distinct as they seem to most theorists.
> How would a robust implementation of
> reinforcement learning duplicate the moral and metamoral
> adaptations which
> are the result of highly specific selection pressures in an ancestral
> environment not shared by AIs?
I don't think it is necessary or even desirable for AI morality to duplicate
human morality. (Or, to use your phrasing, human "moral and metamoral
adaptations.")
For one thing, I do not have that much faith in the stability of any human's
moral system, not even that of a so-called "true altruist." I would rather
have the moral system of an AI be significantly more reliable!!!
> You can transfer moral complexity
> rather than trying to reduplicate its evolutionary causation in humans,
Hmmm. Well, clearly this transfer can be done via uploading human minds...
Whether humans' particular brand of moral complexity can be transferred to a
digital mind that is very nonhuman in character is a different issue. It's
not so clear to me that this can be done effectively.
> but you do have to transfer that complexity - it is not emergent
> just from
Really, I think that any state one desires to see in an AI system can in
principle be achieved thru reinforcement learning. That is, reinforcement
learning is a fully capable optimization algorithm, given enough time and
space. The problem is that it's terribly, terribly slow.
Bill says this will be solved by solving the temporal credit assignment
problem, and I say the solution to that may involve *explicit* use of
planning and inference, as well as the emergence of *further* aspects of
planning and inference.
> I confess that I don't see how this changes anything at all. I assumed a
> simulation model that is not only used for temporal credit
> assignment, but
> which allows for imagination of novel behaviors whose desirability is
> determined by the match of their extrapolated effects against previously
> reinforced goal patterns.
Right. And I believe this is actually NEEDED for temporal credit assignment.
The temporal credit assignment problem is so hard that it can't be solved
within plausible time constraints without imagination, comparison of
imagination to memory, the whole beast called mind...
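The "imagination" loop you describe above can be sketched very simply (my own framing; the grid world, goal pattern, and scoring rule are all invented for illustration): candidate action sequences are rolled out in a learned world model, and each imagined outcome is scored by how closely it matches a remembered, previously reinforced goal pattern.

```python
# Sketch of imagination-based evaluation: roll out candidate plans in a
# world model and compare the imagined end states against a remembered
# goal pattern, rather than waiting for real-world reinforcement.
import itertools

GOAL_PATTERN = (3, 3)        # a state that was reinforced in the past

def world_model(state, action):
    """Stand-in for a learned simulation model: moves on a 2-D grid."""
    dx, dy = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}[action]
    return (state[0] + dx, state[1] + dy)

def imagine(state, plan):
    """Extrapolate a plan's effects entirely inside the model."""
    for a in plan:
        state = world_model(state, a)
    return state

def score(state):
    # match between an imagined outcome and the remembered goal pattern
    return -abs(state[0] - GOAL_PATTERN[0]) - abs(state[1] - GOAL_PATTERN[1])

start = (0, 0)
best = max(itertools.product("NSEW", repeat=6),
           key=lambda p: score(imagine(start, p)))
print("".join(best), imagine(start, best))  # a plan whose imagined end state is (3, 3)
```

Of course, this tiny version still enumerates plans exhaustively; the hard part, as I say above, is doing the candidate generation and comparison-to-memory with realistic resources, which is where the rest of the mind comes in.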
Bill, I think that computationally-plausible temporal credit assignment is a
bigger can of worms than you're realizing ;-)
There isn't going to be a clever algorithm for solving it. The solution is
a well-tuned, integrative mind with many subtle aspects that don't look
explicitly like reinforcement learning...
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT