From: Ben Goertzel (firstname.lastname@example.org)
Date: Wed Dec 13 2000 - 16:45:51 MST
> Ben Goertzel, for philosophical reasons, may choose a design specifically
> tuned to give subgoals autonomy. In the absence of that design decision,
> I do not expect the problem to arise naturally.
I suspect that this design decision is necessary to achieve intelligence. I can't prove this; I could make a strong argument for it, but I don't have time to try now...
> So while the Minskyites might make problems for themselves, I can't see
> the when-subgoals-attack problem applying to either the CaTAI class of
> architectures, or to the transhuman level.
I don't understand the CaTAI architecture well enough to form a
counterargument. But in general, I think that if the system can never forget
a goal while the subgoals it spawned are still remembered, you're going to
have a hellacious memory problem. You can't assume infinite memory, as you'll
find out when you actually start building CaTAI...
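To make the memory worry concrete, here is a toy sketch in Python. All the names (Goal, GoalStore, forget_parents) are hypothetical illustrations, not taken from CaTAI, Webmind, or any real system; it just shows the trade-off: either goals are retained as long as their subgoals live (memory grows with every chain of refinement), or parents can be forgotten and the surviving subgoals become "autonomous" in the sense discussed above.

```python
# Toy model of the goal-memory trade-off. Hypothetical names throughout;
# this is an illustrative sketch, not anyone's actual goal architecture.

class Goal:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # the goal that spawned this one, if any

class GoalStore:
    def __init__(self, forget_parents):
        self.goals = []
        self.forget_parents = forget_parents

    def spawn(self, parent, name):
        child = Goal(name, parent)
        self.goals.append(child)
        if self.forget_parents and parent in self.goals:
            # Forget the parent goal: the subgoal lives on without the
            # goal that justified it -- an "autonomous" subgoal.
            self.goals.remove(parent)
        return child

# If parents can never be forgotten, a subgoal chain of depth d retains
# d+1 goals: memory grows without bound as refinement deepens.
keep = GoalStore(forget_parents=False)
g = Goal("root")
keep.goals.append(g)
for i in range(100):
    g = keep.spawn(g, f"sub{i}")
assert len(keep.goals) == 101

# If parents can be forgotten, memory stays bounded, but the surviving
# subgoal is an orphan: its parent is no longer in the store.
drop = GoalStore(forget_parents=True)
g = Goal("root")
drop.goals.append(g)
for i in range(100):
    g = drop.spawn(g, f"sub{i}")
assert len(drop.goals) == 1
assert drop.goals[0].parent not in drop.goals
```

The point of the sketch is only that bounded memory forces the second regime on you, and the second regime is exactly the one where subgoals outlive the goals that spawned them.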
This archive was generated by hypermail 2.1.5 : Sat May 25 2013 - 04:00:25 MDT