From: Ben Goertzel (firstname.lastname@example.org)
Date: Mon Sep 12 2005 - 15:02:37 MDT
> In my post on the relevance of complex systems, I set out the reasons
> why it is extremely questionable to assume that anyone can build a valid
> AGI by starting with the observation of logical reasoning at the
> extremely high level at which we know it, then using this as the basis for
> the lowest level mechanisms of an AGI.
I don't think that logical reasoning can serve as the sole basis for an AGI
design, but I think it can serve as one of the primary bases.
I think that emergence and complex dynamics are necessary aspects of
intelligence given limited computational resources.
I think that it is quite feasible (and in fact a good idea) to give logic a
more primary role in an AGI than it has in humans. But that doesn't mean I
advocate GOFAI. It means I advocate AGI systems that intelligently couple
logical inference with complex, self-organizing dynamics.
In the human mind, arguably, abstract logical reasoning exists ONLY as a
high-level emergent phenomenon. However, I suggest that in an AGI system,
logical reasoning may exist BOTH as a low-level wired-in subsystem AND as a
high-level emergent phenomenon, and that these two aspects of logic in the
AGI system may be coordinated closely together.
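To make the "wired-in subsystem" half of this concrete, here is a toy sketch (my own illustration, not a description of any actual AGI design) of logical inference as a low-level mechanism: a minimal forward-chaining engine that applies modus ponens over propositional facts. A higher layer could, in principle, mine new rules from regularities in the derived facts and feed them back in, which is one crude reading of the "emergent phenomenon coordinated with the wired-in subsystem" idea.

```python
def forward_chain(facts, rules):
    """Repeatedly apply modus ponens: if A holds and (A -> B) is a rule,
    conclude B. Runs until no new conclusions appear (a fixed point)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical example facts and rules, chosen only for illustration:
facts = {"rain"}
rules = [("rain", "wet_ground"), ("wet_ground", "slippery")]
print(sorted(forward_chain(facts, rules)))
```

The point of the sketch is only that such an engine is cheap and exact at the bottom level, whereas in humans the equivalent competence arguably exists only as an unreliable high-level emergent skill.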
Do you have an argument against this sort of approach? It is not based on
closely simulating human intelligence, but rather on trying to combine the
best of human intelligence with the best of computer technology and
software -- with the aim of making an AGI whose creativity, empathy, and
rationality are superior to those of humans.
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT