From: Phil Goetz (firstname.lastname@example.org)
Date: Tue Sep 13 2005 - 09:38:41 MDT
--- Chris Capel <email@example.com> wrote:
> On 9/12/05, Richard Loosemore <firstname.lastname@example.org> wrote:
> > Lastly, you say: "However, I suggest that in an AGI system,
> > reasoning may exist BOTH as a low-level wired-in subsystem AND as a
> > high-level emergent phenomenon, and that these two aspects of logic
> > in the AGI system may be coordinated closely together." If it really
> > did that, it would (as I understand it) be quite a surprise (to put
> > it mildly) ... CAS systems do not as a rule show that kind of weird
> > reflection, as I said in my earlier posts.
> I'm not sure I can reconcile these two opinions. If you think it's
> feasible to use some sort of logical reasoning (whether rational
> probability analysis or something else) as part of the basic
> substrate of a generally intelligent system, and given that any
> successful AI project would necessarily result in a system that
> *does* exhibit logical reasoning at a high level, how could you find
> it unlikely that a system would combine both features? I probably
> misunderstand you.
I don't think (speaking for Richard :) he's objecting to the two
being in one system. I think he's objecting to the notion that
features of the lower-level logic will be mirrored in the higher-level
logic. I agree; an interaction like that would be problematic:
- it would be unlikely to appear by chance;
- if it did appear by chance, it would suggest that you hadn't
  separated the levels properly in your design;
- if you designed it in, it would make your design a rotten design
  from an engineering standpoint.
- Phil Goetz
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT