Re: What are "AGI-first'ers" expecting AGI will teach us about FAI?

From: Vladimir Nesov (robotact@gmail.com)
Date: Sun Apr 13 2008 - 09:47:18 MDT


I think that the right kind of investigation of Friendliness should in
fact help with AGI, not the other way around. It should be able to
formulate the problem that actually needs solving, in a form that can
be addressed. The main question of Friendliness theory is how to build
a system that significantly helps us (which results in great changes
to us and the world), while at the same time aiming to preserve and
develop the things that we care about.

This role looks very much like what intelligence should do in general.
Currently, intelligence enables simple drives built into our biology
to have their way in situations that they could never comprehend and
that were not around at the time evolution programmed them in.
Intelligence empowers these drives: it allows them to deal with many
novel situations and to solve problems they cannot solve on their own,
while carrying forward their original intention.

This process is not perfect, so in the modern environment some of
these purposes don't play out. People eat the wrong foods and become
ill, or decide not to have many children. Propagation of DNA is no
longer a very significant goal for humans. This is an example of a
subtly Unfriendly AI, the kind that Friendliness-blind AGI development
can end up supplying: it works great at first and *seems* to follow
its intended goals very reliably, but in the end it holds all the
control and starts to ignore its initial purpose.

Grasping the principles by which a modification to a system results
in different dynamics that can be said to preserve the intention of
the initial dynamics, while obviously altering the way the system
operates, can, I think, be a key to general intelligence. If this
intention-preserving modification process is expressed at a low level,
it doesn't need to have higher-level anthropic concepts engraved on
its circuits; it doesn't even need to know about humans. It can be a
*simple* statistical creature. All it needs is to extrapolate the
development of our corner of the universe, where humans are the main
statistical anomaly. It will automatically figure out what it means to
be Friendly, if such is its nature.
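
To make the shape of this concrete, here is a toy sketch in Python.
Everything in it (the scalar goal, the wrap-around reflex, the choice
of environments) is my own hypothetical illustration, nothing the
above specifies: a hard-wired drive fulfils its intention only in the
environment it was tuned for, and a successor that statistically
infers *what the drive was achieving* can carry that intention into
novel situations where the original machinery misfires.

import random
random.seed(0)

GOAL = 0.5  # the hidden "intention" the drive was evolved to serve

def outcome(situation, action):
    """Where the agent ends up: its situation shifted by its action."""
    return situation + action

def reflex(situation):
    """Hard-wired drive, tuned for ancestral situations in [0, 1].
    Its "sensor" wraps around outside that range, so the rule only
    happens to reach GOAL in the world it evolved in."""
    return 0.5 - (situation % 1.0) + random.gauss(0, 0.05)

# In the ancestral environment, the reflex noisily achieves GOAL.
ancestral = [random.uniform(0, 1) for _ in range(1000)]
observed = [outcome(s, reflex(s)) for s in ancestral]

# The "simple statistical creature": infer the intention as the
# statistical regularity in what the old dynamics achieved.
inferred_goal = sum(observed) / len(observed)

def successor(situation):
    """Intention-preserving modification: pursue the inferred goal
    directly, instead of copying the reflex's mechanism."""
    return inferred_goal - situation

# In a novel environment, the reflex drifts far off-purpose while
# the successor still lands near the goal it inferred.
novel = [random.uniform(5, 10) for _ in range(1000)]

def mean_error(policy):
    return sum(abs(outcome(s, policy(s)) - GOAL) for s in novel) / len(novel)

print("reflex error in novel world:   ", round(mean_error(reflex), 3))
print("successor error in novel world:", round(mean_error(successor), 3))

The hard part, of course, is that the real "intention" is nothing
like a scalar and the reference environment is all of human history;
the toy only illustrates preserving the statistics of what the old
dynamics achieved rather than copying its mechanism.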

-- 
Vladimir Nesov
robotact@gmail.com

