From: Phil Goetz (philgoetz@yahoo.com)
Date: Wed Aug 17 2005 - 09:26:22 MDT
--- Richard Loosemore <rpwl@lightlink.com> wrote:
> If a paperclip maximiser is not aware of such things as goals
> and motivations, it is not smart enough to be relevant, for
> the following sequence of reasons:
Richard - You're using the term "goal" in a sort of
common-sense way. Think of a goal instead as the kind of
goal that is assigned to a rule-based system (RBS). The RBS
seeks to accomplish that goal; it does not reflect on
whether that goal is a worthy one.
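To make that concrete, here is a minimal sketch in Python
(the rules, state, and goal are all hypothetical) of a
rule-based system that pursues whatever goal predicate it is
handed. Note that nothing in it ever evaluates the goal
itself:

    # Minimal RBS sketch: the goal is an opaque predicate the
    # system checks but never questions.
    def run_rbs(goal_test, rules, state):
        while not goal_test(state):
            for condition, action in rules:
                if condition(state):
                    state = action(state)
                    break
            else:
                return state  # no rule fires: stuck, not reflective
        return state

    # Toy example: "maximize paperclips" reduced to a target count.
    rules = [
        (lambda s: s["wire"] > 0,
         lambda s: {**s, "wire": s["wire"] - 1,
                    "paperclips": s["paperclips"] + 1}),
    ]
    state = {"paperclips": 0, "wire": 5}
    print(run_rbs(lambda s: s["paperclips"] >= 3, rules, state))
    # prints {'paperclips': 3, 'wire': 2}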
The human equivalent of such a "goal" is not any of the
everyday "goals" we have such as sex, money, and power.
Jeff Hawkins' recent book /On Intelligence/, for instance,
would say that the "goal" is to maximize the accuracy
of our predictions. The top-level goal is unconscious,
not stored in declarative form, and not accessible to
reflection. It's more akin to a drive, like enjoying
good food. You don't sit around and wonder whether eating
an ice-cream sundae is actually noble enough to merit the
good feelings you assign to it. You can't control that.
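A toy illustration of that distinction, under my own
assumption (not Hawkins') that the everyday goals live in
inspectable data while the prediction drive is nothing but
the shape of an update rule:

    class Agent:
        def __init__(self):
            # declarative goals: visible to reflection
            self.goals = ["money", "sex", "power"]
            self._model = {}

        def observe(self, cue, outcome):
            # The top-level "goal" (improve prediction accuracy)
            # is not an entry in self.goals. It exists only as the
            # shape of this update; no introspection reaches it.
            predicted = self._model.get(cue)
            self._model[cue] = outcome
            return predicted == outcome

        def reflect(self):
            # reflection sees only the declarative layer
            return list(self.goals)

    a = Agent()
    a.observe("clouds", "rain")
    print(a.reflect())  # ['money', 'sex', 'power']; no prediction drive listed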
It is interesting that you can condition your response
to ice-cream sundaes by, for instance, giving yourself
a shock every time you eat one. I heard about a guy
who connected a smoke-detector to an electrode to give
himself a shock every time he smoked. He ended up
addicted to both cigarettes and electrical shocks.
So it may be that our hypothetical AI can in some way
re-condition its motivations.
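Here is a toy sketch of what such re-conditioning might look
like (the update rule and all the numbers are assumptions,
not a model of real conditioning): pairing an action with a
punisher drags its learned value down, but the punisher's own
value is dragged up by the reward it now predicts, as in the
smoke-detector story.

    def condition(value, outcome, rate=0.2):
        # one step: move the learned value of a stimulus
        # toward the outcome it was paired with
        return value + rate * (outcome - value)

    smoking, shock = 0.8, -1.0   # initial learned values
    for _ in range(20):
        smoking = condition(smoking, shock)  # cigarette paired with shock
        shock = condition(shock, 0.8)        # shock paired with smoking's pleasure
    print(round(smoking, 2), round(shock, 2))  # both end up positive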
> c) A successful Seed AI will bootstrap and then eliminate
> rival projects quickly (except for case (d) below). After
> that, it will not allow experiments such as the construction
> of superintelligent paperclip maximisers.
That's the problem, not the answer.
> So, again: I am not anthropomorphising (accidentally
> attributing human-like qualities where they don't belong),
> but making the specific statement that a seed AI worth
> worrying about would be impossibly crippled if it did not
> have awareness of such design issues.
The problem is not that the seed AI is not aware of such
design issues. The problem is that the basic motivations
a creature has are not logical, are not a priori, and
maximizing paperclips is ultimately as reasonable as,
say, maximizing the number of sexual partners one has,
or maximizing the number of ice-cream sundaes one eats,
or some combination thereof. Many philosophers have
agreed that humans pursue pleasure and avoid pain and do
nothing else, and deriving pleasure from skin-to-skin
contact or from ingesting ice-cream sundaes is no more
rational than deriving pleasure from the production of
another paperclip.
A pure intellect with no irrational motivations would
do nothing at all. Even self-preservation is irrational.
- Phil Goetz