From: Gary Miller (garymiller@starband.net)
Date: Mon Dec 09 2002 - 12:15:48 MST
Bill, and my apologies to the mailing list for being off-topic...
On December 9th you said:
"Bill Gates may not be all that altruistic. Perhaps he is trying to
counteract the bad publicity of the M$ antitrust case. His anti-AIDS
campaign is wonderful, but it is interesting that it is targeted at
India where there are many talented programmers, rather than Africa
where there are not so many programmers."
I can appreciate your cynicism in this day and age. But since its
inception in 1994, the Bill and Melinda Gates Foundation has been
responsible for over $2.5 billion in global health program grants! Even
based on stock values before the economic downturn, this is a sizable
percentage of his total net worth! Show me what the next five richest
people in the world have given back in this same time period!
Infectious Disease and Vaccines    $1,342,508,667
Reproductive and Health Care         $589,170,701
HIV/AIDS and TB                      $538,543,383
Other                                 $40,307,826
Emergency Relief                      $10,350,000
Total                              $2,520,880,577
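(As a sanity check, the line items do add up to the stated total; a
quick check in Python, with the numbers copied from the list above:)

    grants = [1_342_508_667, 589_170_701, 538_543_383, 40_307_826, 10_350_000]
    assert sum(grants) == 2_520_880_577   # matches the stated total
    print(f"${sum(grants):,}")            # -> $2,520,880,577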
The choice of India over Africa or any other country was, I'm sure,
complex. When you give money away you have to make sure that the
country or program accepting it will make maximal use of the money,
with a minimum diverted to arms, corrupt government, etc. You also
have to look at other disease and starvation rates to ensure you are
not just prolonging the misery of a population being ravaged by other
problems that are beyond the scope of what you are capable of doing.
I know we in America like to root for the underdog and pick apart the
biggest and the richest, perhaps from jealousy, or perhaps it's just
our nature. At the same time, our children idolize sports heroes, rap
and rock stars, and movie stars who sometimes go out of their way to
glamorize hard drugs, violence, and self-loathing. Wouldn't it be
better if they emulated someone who was successful beyond belief and
gave something back to the world?
Yes, open-source software is free to the world, but it will never
vaccinate one child, or save one family from watching their children
die. It may not be that far-fetched that the next time you tell someone
that Microsoft is stealing their money and to use Linux instead, some
child may go unvaccinated somewhere in the world. I may be in denial,
but every time I send money to Microsoft I try to believe that a part
of that money goes to help someone somewhere in the world.
-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Bill
Hibbard
Sent: Monday, December 09, 2002 12:55 PM
To: sl4@sl4.org
Subject: RE: Uploading with current technology
Hi Gary,
On Mon, 9 Dec 2002, Gary Miller wrote:
> Hi Bill,
>
> On Sun, 9 Dec 2002, Bill Hibbard wrote:
>
> >> Furthermore, behaviors learned via their old greedy or xenophobic
> >> values would be negatively reinforced and disappear.
>
> How do you give negative reinforcement to someone who has succeeded so
> far beyond the average man that they are spiritually, emotionally, and
> physically untouchable?
Reinforcement values can be built into the basic learning architecture
of a brain. The communist experiments of the twentieth century
demonstrated the difficulty of changing basic human values.
> Obsessive fear of losing what one has already worked so hard to
> achieve is one of the drivers for achieving ever increasing power and
> wealth. Perhaps it is the recognition and fear of one's eventual
> mortality today that encourages the very rich to share the wealth
> through philanthropy and to invest in their afterlife, so to speak.
> Once a person has reached this level of success and power, I would
> defy anyone to reeducate them to the fact that giving a large portion
> of their money away is the optimal way to further their own
> self-interests, especially if their life spans were hugely extended.
This seconds what I said in my message: socially imposed values can
easily be overpowered by the innate values of human brains, in humans
with the power to ignore social pressure.
Thus, to ensure human safety in a world populated by super-intelligent
machines or humans, the basic (hard-wired) reinforcement learning values
of super-intelligent brains must be the happiness of all humans.
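A toy sketch of this idea in Python (illustrative only; the humans, the
'help'/'hoard' actions, and all the numbers here are invented): an agent
whose built-in reward signal is the mean happiness of all humans rather
than its own gain.

    # Hypothetical example: the agent's hard-wired reward is aggregate
    # human happiness, so altruistic behavior is reinforced without any
    # social pressure.
    HUMANS = ["alice", "bob", "carol"]

    def happiness(world):
        """Hard-wired reward signal: mean happiness across all humans."""
        return sum(world[h] for h in HUMANS) / len(HUMANS)

    def step(world, action):
        """Toy dynamics: 'help' raises everyone's happiness a little;
        'hoard' benefits only the agent, which the reward ignores."""
        new_world = dict(world)
        if action == "help":
            for h in HUMANS:
                new_world[h] = min(1.0, new_world[h] + 0.1)
        return new_world

    world = {h: 0.5 for h in HUMANS}
    for action in ("help", "hoard"):
        print(action, "-> reward", happiness(step(world, action)))

Under such a value function 'help' scores 0.6 and 'hoard' only 0.5, so
learning reinforces the altruistic behavior by construction.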
> We live again in a day where the middle class is being eroded from the
> top and bottom. The rich do get richer and the poor are becoming more
> numerous. I have a tremendous respect for people like Bill Gates who
> are spending large amounts of their money in this life to improve
> living conditions in so many parts of the world. I would pray to see
> this become the norm instead of the exception. But unfortunately too
> many billionaires still operate under the philosophy that "whoever
> dies with the most toys (or billions) wins the game".
Bill Gates may not be all that altruistic. Perhaps he is trying to
counteract the bad publicity of the M$ antitrust case. His anti-AIDS
campaign is wonderful, but it is interesting that it is targeted at
India where there are many talented programmers, rather than Africa
where there are not so many programmers.
Cheers,
Bill
> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Bill
> Hibbard
> Sent: Monday, December 09, 2002 10:00 AM
> To: sl4@sl4.org
> Subject: Re: Uploading with current technology
>
>
>
> Hi Gordon,
>
> On Sun, 8 Dec 2002, Gordon Worley wrote:
>
> > On Sunday, December 8, 2002, at 01:08 PM, Ben Goertzel wrote:
> >
> > > http://users.rcn.com/standley/AI/immortality.htm
> > >
> > > Thoughts?
> > >
> > > Can anyone with more neuro expertise tell me: Is this guy correct
> > > as regards what is currently technologically plausible?
> >
> > The Singularity and, specifically, FAI is a faster, safer way of
> > transcending. Super *human* intelligence is highly dangerous.
> > Think male chimp with nuclear feces. Unless you've got some way to
> > protect the universe from the super *humans*, we're probably better
> > off with our current brains.
>
> I largely agree. But as I point out in my book:
>
> http://www.ssec.wisc.edu/~billh/super.html
>
> after humans meet super-intelligent machines they will want to become
> super-intelligent themselves, and will want the indefinite life span of
> a repairable machine brain supporting their mind.
>
> With super-intelligent machines, the key to human safety is in
> controlling the values that reinforce learning of intelligent
> behaviors. We can design machines so that their behaviors are
> positively reinforced by human happiness and negatively reinforced by
> human unhappiness.
>
> Behaviors are reinforced by very different values in human brains.
> Human values are mostly self-interested. As social animals humans have
> some more altruistic values, but these mostly depend on social
> pressure. Very powerful humans can transcend social pressure and
> revert to their selfish values, hence the maxim that power corrupts
> and absolute power corrupts absolutely. Nothing will give a human more
> power than super-intelligence.
>
> Society has a gradual (lots of short-term setbacks, to be
> sure) long-term trend toward equality because human brains are
> distributed quite democratically: the largest IQ (not a perfect
> measure, but widely applied) in history is only twice the average.
> However, the largest computers, buildings, trucks, etc. are thousands
> of times their averages. The migration of human minds into machine
> brains threatens to end the even distribution of human intelligence,
> and hence end the gradual long-term trend toward social equality.
>
> Given that the combination of super-intelligence and human values is
> dangerous, the solution is to make alteration of reinforcement
> learning values a necessary condition for granting a human
> super-intelligence. That is, when we have the technology to manipulate
> human intelligence then we also need to develop the technology to
> manipulate human reinforcement learning values. Because this change in
> values would affect learning, it would not immediately change the
> human's old behaviors. Hence they would still "be themselves". But as
> they learned super-intelligent behaviors, their new values would cause
> those newly learned behaviors to serve the happiness of all humans.
> Furthermore, behaviors learned via their old greedy or xenophobic
> values would be negatively reinforced and disappear.
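A toy sketch of that dynamic in Python (illustrative only, not from the
book; the two actions, the reward numbers, and the learning rate are all
invented): swapping the reward function leaves previously learned action
values intact at first, and only continued learning extinguishes the old
behavior.

    # Hypothetical example: learned action values stand in for
    # "behaviors"; swapping the reward changes what future learning
    # reinforces, not what has already been learned.
    ACTIONS = ["selfish", "altruistic"]
    q = {a: 0.0 for a in ACTIONS}   # learned action values
    ALPHA = 0.1                     # learning rate

    def old_reward(a):              # pre-upgrade, self-interested values
        return 1.0 if a == "selfish" else 0.2

    def new_reward(a):              # post-upgrade values: all humans' happiness
        return 1.0 if a == "altruistic" else 0.0

    def train(reward, episodes):
        for t in range(episodes):
            a = ACTIONS[t % 2]      # alternate actions, a stand-in for exploration
            q[a] += ALPHA * (reward(a) - q[a])

    train(old_reward, 500)
    print("under old values:", q)   # 'selfish' dominates (~1.0 vs ~0.2)
    train(new_reward, 6)
    print("just after swap: ", q)   # old habit still strongest (~0.73 vs ~0.42)
    train(new_reward, 500)
    print("after relearning:", q)   # 'altruistic' dominates; old behavior decayed

Right after the swap the agent still "is itself" (the old habit scores
highest), but continued learning under the new values gradually replaces
the old behavior, as the paragraph above describes.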
>
> One danger is the temptation to use genetic manipulation as a shortcut
> to super-intelligent humans. This may provide a way to increase human
> intelligence before we understand how it works and before we know how
> to change human reinforcement learning values. This danger is neatly
> parallel with Mary Shelley's Frankenstein, in which a human monster is
> created by a scientist tinkering with technology that he did not
> really understand. We need to understand how human brains work and
> solve the AGI problem before we start manipulating human brains.
>
> Cheers,
> Bill
> ----------------------------------------------------------
> Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
> test@doll.ssec.wisc.edu 608-263-4427 fax: 608-263-6738
> http://www.ssec.wisc.edu/~billh/vis.html
>
>