RE: Uploading with current technology

From: Bill Hibbard (test@doll.ssec.wisc.edu)
Date: Mon Dec 09 2002 - 10:54:50 MST


Hi Gary,

On Mon, 9 Dec 2002, Gary Miller wrote:

> Hi Bill,
>
> On Mon, 9 Dec 2002, Bill Hibbard wrote:
>
> >> Furthermore, behaviors learned via their old greedy or xenophobic
> >> values would be negatively reinforced and disappear.
>
> How do you give negative reinforcement to someone who has succeeded so
> far beyond the average man that they are spiritually, emotionally, and
> physically untouchable?

Reinforcement values can be built into the basic learning
architecture of a brain, so the negative reinforcement is
internal rather than something society must impose. The
communist experiments of the twentieth century demonstrated
the difficulty of changing basic human values.
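
As a toy illustration of how this works, here is a minimal
stateless Q-learning sketch in Python. The actions and reward
numbers are invented for the example, not a real design; the
point is that once the primary reinforcement values change,
behavior learned under the old values is driven out by the
ordinary learning process itself, with no outside enforcer.

  import random

  ACTIONS = ["hoard", "share"]

  def old_values(action):
      # Self-interested reinforcement: hoarding pays, sharing does not.
      return 1.0 if action == "hoard" else 0.0

  def new_values(action):
      # Reinforcement tied to the happiness of all humans.
      return 1.0 if action == "share" else -1.0

  def learn(q, reward_fn, steps, alpha=0.1, epsilon=0.1):
      # Stateless Q-learning: mostly exploit the learned values,
      # occasionally explore at random.
      for _ in range(steps):
          if random.random() < epsilon:
              action = random.choice(ACTIONS)
          else:
              action = max(ACTIONS, key=lambda a: q[a])
          q[action] += alpha * (reward_fn(action) - q[action])

  q = {a: 0.0 for a in ACTIONS}
  learn(q, old_values, 500)   # entrenched habit: q["hoard"] near 1.0
  learn(q, new_values, 500)   # same agent, new values
  print(q)                    # "hoard" driven negative, "share" dominates

At first the agent keeps hoarding out of habit, but every
repetition now yields negative reinforcement, so the old
behavior extinguishes and sharing takes over.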

> Obsessive fear of losing what one has already worked so hard to achieve
> is one of the drivers for achieving ever-increasing power and wealth.
> Perhaps it is the recognition and fear of one's eventual mortality today
> that encourages the very rich to share their wealth through philanthropy
> and to invest in their afterlife, so to speak. Once a person has reached
> this level of success and power, I would defy anyone to reeducate them
> to the fact that giving a large portion of their money away is the
> optimal way to further their own self-interest, especially if their
> life span were hugely extended.

This just seconds what I said in my message: socially
imposed values can easily be overpowered by the innate
values of human brains, at least in humans with the power
to ignore social pressure.

Thus, to ensure human safety in a world populated by
super-intelligent machines or humans, the basic (hard-wired)
reinforcement learning values of super-intelligent brains
must be the happiness of all humans.
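
In machine terms this amounts to making human happiness the
only primary reward signal. A minimal sketch in Python, where
sense_human_happiness() is a hypothetical sensor (stubbed with
random noise here, since no such sensor exists yet):

  import random

  def sense_human_happiness(human):
      # Hypothetical sensor returning a happiness estimate in
      # [-1, 1]; stubbed with random noise purely for illustration.
      return random.uniform(-1.0, 1.0)

  def primary_reward(humans):
      # The machine's sole hard-wired reinforcement value: behavior
      # is positively reinforced by human happiness and negatively
      # reinforced by human unhappiness, summed over all humans.
      return sum(sense_human_happiness(h) for h in humans)

  print(primary_reward(["alice", "bob", "carol"]))

Everything a machine with this architecture learned would be
learned because it increased this reward, serving the happiness
of all humans rather than the interests of any one owner.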

> We again live in a day when the middle class is being eroded from the
> top and the bottom. The rich do get richer and the poor are becoming
> more numerous. I have tremendous respect for people like Bill Gates who
> are spending large amounts of their money in this life to improve living
> conditions in so many parts of the world. I would pray to see this
> become the norm instead of the exception. But unfortunately too many
> billionaires still operate under the philosophy that "whoever dies with
> the most toys (or billions) wins the game".

Bill Gates may not be all that altruistic. Perhaps he is trying
to counteract the bad publicity of the M$ antitrust case. His
anti-AIDS campaign is wonderful, but it is interesting that it is
targeted at India, where there are many talented programmers, rather
than at Africa, where there are not so many.

Cheers,
Bill

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Bill
> Hibbard
> Sent: Monday, December 09, 2002 10:00 AM
> To: sl4@sl4.org
> Subject: Re: Uploading with current technology
>
>
>
> Hi Gordon,
>
> On Sun, 8 Dec 2002, Gordon Worley wrote:
>
> > On Sunday, December 8, 2002, at 01:08 PM, Ben Goertzel wrote:
> >
> > > http://users.rcn.com/standley/AI/immortality.htm
> > >
> > > Thoughts?
> > >
> > > Can anyone with more neuro expertise tell me: Is this guy correct as
> > > regards what is currently technologically plausible?
> >
> > The Singularity and, specifically, FAI is a faster, safer way of
> > transcending. Super *human* intelligence is highly dangerous. Think
> > male chimp with nuclear feces. Unless you've got some way to protect
> > the universe from the super *humans*, we're probably better off with
> > our current brains.
>
> I largely agree. But as I point out in my book:
>
> http://www.ssec.wisc.edu/~billh/super.html
>
> after humans meet super-intelligent machines, they will want
> to become super-intelligent themselves, and will want the indefinite
> life span of a repairable machine brain supporting their minds.
>
> With super-intelligent machines, the key to human safety is
> in controlling the values that reinforce learning of intelligent
> behaviors. In machines, we can design them so their behaviors are
> positively reinforced by human happiness and negatively reinforced by
> human unhappiness.
>
> Behaviors are reinforced by very different values in human brains.
> Human values are mostly self-interested. As social animals, humans have
> some more altruistic values, but these mostly depend on social
> pressure. Very powerful humans can transcend social pressure and revert
> to their selfish values; hence the maxim that power corrupts and
> absolute power corrupts absolutely. Nothing will give a human more
> power than super-intelligence.
>
> Society has a gradual long-term trend toward equality (with lots of
> short-term setbacks, to be sure) because human brains are distributed
> quite democratically: the largest IQ in history (not a perfect measure,
> but widely applied) is only about twice the average. However, the
> largest computers, buildings, trucks, etc. are thousands of times
> larger than their averages. The migration of human minds into machine
> brains threatens to end the even distribution of human intelligence,
> and hence to end the gradual long-term trend toward social equality.
>
> Given that the combination of super-intelligence and human values is
> dangerous, the solution is to make alteration of reinforcement learning
> values a necessary condition for granting a human super-intelligence.
> That is, when we have the technology to manipulate human intelligence,
> we also need to develop the technology to manipulate human
> reinforcement learning values. Because this change in values would
> affect only future learning, it would not immediately change the
> human's old behaviors. Hence they would still "be themselves". But as
> they learned super-intelligent behaviors, their new values would cause
> those newly learned behaviors to serve the happiness of all humans.
> Furthermore, behaviors learned via their old greedy or xenophobic
> values would be negatively reinforced and disappear.
>
> One danger is the temptation to use genetic manipulation as a shortcut
> to super-intelligent humans. This may provide a way to increase human
> intelligence before we understand how it works and before we know how
> to change human reinforcement learning values. This danger neatly
> parallels Mary Shelley's Frankenstein, in which a human monster is
> created by a scientist tinkering with technology that he did not really
> understand. We need to understand how human brains work and solve the
> AGI problem before we start manipulating human brains.
>
> Cheers,
> Bill
> ----------------------------------------------------------
> Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
> test@doll.ssec.wisc.edu 608-263-4427 fax: 608-263-6738
> http://www.ssec.wisc.edu/~billh/vis.html
>
>


