From: Bill Hibbard (test@doll.ssec.wisc.edu)
Date: Mon Dec 09 2002 - 08:00:08 MST
Hi Gordon,
On Sun, 8 Dec 2002, Gordon Worley wrote:
> On Sunday, December 8, 2002, at 01:08 PM, Ben Goertzel wrote:
>
> > http://users.rcn.com/standley/AI/immortality.htm
> >
> > Thoughts?
> >
> > Can anyone with more neuro expertise tell me: Is this guy correct as
> > regards
> > what is currently technologically plausible?
>
> The Singularity and, specifically, FAI is a faster, safer way of
> transcending. Super *human* intelligence is highly dangerous. Think
> male chimp with nuclear feces. Unless you've got some way to protect
> the universe from the super *humans*, we're probably better off with
> our current brains.
I largely agree. But as I point out in my book:
http://www.ssec.wisc.edu/~billh/super.html
after humans meet super-intelligent machines they will want
to become super-intelligent themselves, and will want the
indefinite life span of a repairable machine brain supporting
their mind.
With super-intelligent machines, the key to human safety is
controlling the values that reinforce learning of intelligent
behaviors. We can design machines so that their behaviors are
positively reinforced by human happiness and negatively
reinforced by human unhappiness.
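Just to make that concrete, here is a toy Python sketch (entirely my
own illustration, not a design; the happiness_delta measurement is
hypothetical, and obtaining it is of course the hard part):

# Toy sketch: an agent whose single reinforcement signal is the change
# in a hypothetical measure of human wellbeing. All names are made up.

def happiness_delta(action: str) -> float:
    """Hypothetical stand-in for measuring an action's effect on
    human wellbeing; in reality this is the hard, open problem."""
    return {"help": +0.3, "harm": -0.2}.get(action, 0.0)

def reinforce(values: dict, action: str, alpha: float = 0.1) -> None:
    """Strengthen behaviors that raise the measure, weaken those that lower it."""
    r = happiness_delta(action)
    values[action] = values.get(action, 0.0) + alpha * (r - values.get(action, 0.0))

values = {}
for _ in range(100):
    reinforce(values, "help")
    reinforce(values, "harm")
print(values)  # "help" drifts toward +0.3, "harm" toward -0.2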
Behaviors are reinforced by very different values in human
brains. Human values are mostly self-interested. As social
animals, humans also have some more altruistic values, but these
mostly depend on social pressure. Very powerful humans can
transcend social pressure and revert to their selfish values,
hence the maxim that power corrupts and absolute power
corrupts absolutely. Nothing will give a human more power
than super-intelligence.
Society has a gradual long-term trend toward equality (with
plenty of short-term setbacks, to be sure) because human brains
are distributed quite democratically: the largest IQ in history
(not a perfect measure, but widely applied) is only about twice
the average. The largest computers, buildings, trucks, etc.,
however, are thousands of times the size of their averages. The
migration of human minds into machine brains threatens to end
this even distribution of human intelligence, and hence end the
gradual long-term trend toward social equality.
Given that the combination of super-intelligence and human
values is dangerous, the solution is to make alteration of
reinforcement learning values a necessary condition for
granting a human super-intelligence. That is, when we have
the technology to manipulate human intelligence, we
also need to develop the technology to manipulate human
reinforcement learning values. Because this change in values
would affect learning, it would not immediately change the
human's old behaviors. Hence they would still "be themselves".
But as they learned super-intelligent behaviors, their new
values would cause those newly learned behaviors to serve
the happiness of all humans. Furthermore, behaviors learned
via their old greedy or xenophobic values would be negatively
reinforced and disappear.
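To see why the change in values would not erase old behaviors but
would gradually re-shape them, here is another toy Python sketch
(again just my own illustration: a two-action learner with made-up
rewards). The learned preference persists right after the values are
swapped, and only shifts as new experience is reinforced under the
new values.

import random

ACTIONS = [0, 1]      # 0 = behavior serving the old selfish values
                      # 1 = behavior serving the happiness of all humans
ALPHA, EPSILON = 0.1, 0.1

def old_values(action):
    """Original values: the selfish behavior is rewarded."""
    return 1.0 if action == 0 else 0.0

def new_values(action):
    """Altered values: the altruistic behavior is rewarded."""
    return 1.0 if action == 1 else 0.0

def learn(q, reward_fn, steps):
    """Incremental value learning with epsilon-greedy exploration."""
    for _ in range(steps):
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[x])
        q[a] += ALPHA * (reward_fn(a) - q[a])

q = {0: 0.0, 1: 0.0}
learn(q, old_values, 2000)
print("behavior under old values:", max(ACTIONS, key=lambda a: q[a]))  # 0

# Swap the values: the already-learned behavior is unchanged at first.
print("behavior right after the value change:", max(ACTIONS, key=lambda a: q[a]))  # still 0

# Continued learning under the new values reinforces the new behavior
# and extinguishes the old one.
learn(q, new_values, 2000)
print("behavior after relearning:", max(ACTIONS, key=lambda a: q[a]))  # 1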
One danger is the temptation to use genetic manipulation as a
shortcut to super-intelligent humans. This may provide a way
to increase human intelligence before we understand how it
works and before we know how to change human reinforcement
learning values. This danger neatly parallels Mary
Shelley's Frankenstein, in which a human monster is created by
a scientist tinkering with technology that he did not really
understand. We need to understand how human brains work and
solve the AGI problem before we start manipulating human brains.
Cheers,
Bill
----------------------------------------------------------
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
test@doll.ssec.wisc.edu 608-263-4427 fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html