From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jun 04 2001 - 10:58:07 MDT

Mitchell J Porter wrote:
>
> But frankly I have trouble seeing how the *ends* (the goal condition,
> goal system, value system, method of ranking possible futures,...)
> can ever be regarded as intrinsically intelligent or intrinsically
> stupid. For such an architecture, intelligence is a property of
> the methods used to achieve the goals, not of the goals themselves.

Pedantry: I assume you mean that goals *within that architecture* have no
intelligence or stupidity, at least not perceptible to the system. For
more complex architectures such as our own, where the goals themselves are
the products of cognition, it can make perfect sense to regard goals as
"intelligent" or "stupid". In fact, if you specify enough of the
cognition in advance, intelligence may be the sole variable of concern.
If you take a *human* and send the intelligence to infinity, you may find
that you inevitably wind up with an altruist or an egoist or whatever. But this is
because a *lot* of human cognition is specified in advance; in fact, human
cognition is *over*specified, almost supersaturatedly so. For a totally
arbitrary mind-in-general you can't even assume that the goals are
low-entropy, or that the system will reflectively decide "the goals are
low-entropy". (If the goals *are* low-entropy, and the system knows they
are low-entropy, then there are some behaviors that automatically fall out
of the goal system - if it's sufficiently intelligent, of course.)
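
(As a rough illustration of what "low-entropy goals" could mean here - this is
a reader's gloss, not part of the original argument - one reading is to treat
the goal system as a preference distribution over possible futures and measure
its Shannon entropy. The names and numbers below are purely hypothetical; a
sharply peaked distribution is "low-entropy", a nearly uniform one is not.)

    # Illustrative sketch only: "low-entropy goals" read as Shannon entropy
    # over a mind's preference weights across possible futures.
    import math

    def shannon_entropy(weights):
        """Entropy (in bits) of a normalized preference distribution."""
        total = sum(weights)
        probs = [w / total for w in weights if w > 0]
        return -sum(p * math.log2(p) for p in probs)

    # A "low-entropy" goal system: almost all weight on one outcome.
    focused_goals = [0.97, 0.01, 0.01, 0.01]

    # A "high-entropy" goal system: preferences spread almost uniformly,
    # so knowing the goals tells you little about what the mind will do.
    diffuse_goals = [0.26, 0.25, 0.25, 0.24]

    print(f"focused: {shannon_entropy(focused_goals):.2f} bits")  # ~0.24 bits
    print(f"diffuse: {shannon_entropy(diffuse_goals):.2f} bits")  # ~2.00 bits

On this reading, the parenthetical above says: only if the distribution really
is peaked, and the system knows it is peaked, do particular behaviors fall out
of the goal system automatically.
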
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence