From: Samantha Atkins (sjatkins@gmail.com)
Date: Sat Apr 26 2008 - 02:12:41 MDT
Vladimir Nesov wrote:
> I think that the right kind of investigation of Friendliness should in
> fact help in AGI, not the other way around. It should be able to
> formulate the problem that actually needs solving, in a form that can
> be addressed. The main question of Friendliness theory is how to build
> a system that significantly helps us (which results in great changes
> to us and the world), while at the same time (with the goal of)
> preserving and developing things that we care about.
>
Since we are quite aware of how limited our intelligence is and how
tangled and suspect the roots of our values are, I am not able to be
sure that "things that we care about" are the best a superior mind can
come up with, or that they should unduly limit it. I am not at all sure
that even a super-efficient direct extrapolation from the kind of
beings we are leads to the very best that can be for us, much less to
the best state for the local universe.
> This role looks very much like what intelligence should do in general.
> Currently, intelligence enables simple drives built into our biology
> to have their way in situations that they can never comprehend and
> which were not around at the time they were programmed in by
> evolution. Intelligence empowers these drives, allows them to deal
> with many novel situations and solve problems which they can't on
> their own, while carrying out their original intention.
>
Intelligence to some degree goes beyond those drives, sees the limits of
their utility, and sees what may be better. If we are to 'become as
gods' then we must at some point somehow go beyond our evolutionary
psychology. A psychological ape will not enjoy being an upload except
in a carefully crafted virtual monkey house. A psychological ape will
not even enjoy an indefinitely long life of apish pleasures in countless
variations. At some point we become, more and more, something other
than what our evolutionary psychology says we are.
> This process is not perfect, so in the modern environment some of
> these purposes don't play out. People eat the wrong foods and become
> ill, or decide not to have many children. Propagation of DNA is no
> longer a very significant goal for humans. This is an example of
> subtly Unfriendly AI, the kind that Friendliness-blind AGI development
> can end up supplying: it works great at first and *seems* to follow
> its intended goals very reliably, but in the end it has all the
> control and starts to ignore its initial purpose.
>
Are you saying that good AGI must keep us being happy little breeders?
Purpose evolves, or it is a dead, endless circle.
> Grasping the principles by which a modification to a system results in
> different dynamics that can be said to preserve the intention of the
> initial dynamics, while obviously altering the way it operates, can, I
> think, be a key to general intelligence. If this intention-preserving
> modification process is expressed at a low level, it doesn't need to
> have higher-level anthropic concepts engraved on its circuits; it
> doesn't even need to know about humans. It can be a *simple*
> statistical creature. All it needs is to extrapolate the development
> of our corner of the universe, where humans are the main statistical
> anomaly. It will automatically figure out what it means to be
> Friendly, if such is its nature.
>
I don't for a moment believe that the wise guiding of the development
and evolution of the human species will or can be achieved by some
automated statistical process. To me that is a much more dangerously
un-sane notion than simply developing actual AGI as quickly as
possible because we need the intelligence NOW.
- samantha