From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Aug 07 2001 - 11:30:53 MDT
Ben Goertzel wrote:
>
> However, I'd caution you that getting these things
>
> > 3. Symbol formation.
> > 4. Symbol structures (thoughts).
> > 5. Thought triggering and deliberation.
>
> to work in simple cases is a LONG LONG WAY off from getting them to work in
> more advanced cases. These general labels each refer to sets of problems of
> vastly varying difficulty. You can't assume that a mechanism adequate to
> yield a simple example of deliberation is going to be adequate to do
> real-life examples of deliberation. This warning is particularly necessary
> if one is thinking of a logical-rule-based system of some sort, because such
> systems typically scale very badly.... And scaling is not just an
> engineering issue, it's a logic-of-mind issue...
Well, I'm not thinking of a logical-rule-based system, nor indeed anything
remotely like it. I am fully aware that it is possible to build simple
mechanisms for simple cases that don't work for complex cases. The
milestone philosophy is tuned to building complex mechanisms, testing them
on simple cases, and then moving up to testing them on complex cases.

This, admittedly, presumes that you know, in advance, what you're doing.
It presumes the self-discipline not to code special cases. It presumes
that the standard mistakes of classical AI, such as the one you mention,
are not being made. This is a strong claim. The milestones I listed are
not optimized to convince an external observer that the claim is true;
they are optimized to build a real AI at maximum speed. Along the way it
should be possible to collect marketing trophies, but that will happen
only in due time, and only when a sufficiently large base of complexity
exists.

The milestones are optimized for the rapid prototyping of a very complex
system given a very complex theory. The more one concentrates on building
a real mind, and on doing things right the first time, the more complexity
it takes to do simple things. It is better to implement functionality with
a thought than with a fragment of code, but a thought is vastly more work
and requires a vastly larger base of functioning subsystems. I see the
greater danger as distorting the system design to get quick results. So
the trophies, as opposed to the milestones, will be collected only in due
time.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence