From: Nick Hay (firstname.lastname@example.org)
Date: Fri Jul 11 2003 - 02:01:13 MDT
Philip Sutton wrote:
> The cost per unit will fall over the years but the demand for
> computational power will grow rapidly - faster?? - so expanding mind
> power is likely to still be an expensive proposition.
Well, expensive until the AGI works out how to implement molecular
nanotechnology, for instance. I have a feeling that a linear increase in
computational power will give you an exponential rise in intelligence, like
the change from chimpanzees to humans.
> So it's out to the saltmines for the young AGI to earn some money to
> support its habit - expanding mind power.
With sufficient intelligence it's not necessary to play the human economic
game. The extra computational power goes toward solving the problem,
unlike in the case of a drug addict.
> How do we avoid getting a generation of AGIs that will do whatever
> earns the most money to expand their minds? If Dubya or some Mafia
> boss or an arms manufacture or a drug company or....pay the highest
> why wouldn't the young AGI go along with it?
To what end is the AGI collecting computational power? Given an AGI with a
supergoal of Friendliness (where killing people generally isn't desirable),
although computational power is useful (allows it to better work out how to
help), it's not useful at all costs. A necessary condition for avoiding such
a wireheading failure (computational power isn't valuable at all costs to a
humane mind) is to have Friendliness as the supergoal from which all other
value is derived.
Basically, given Friendliness the AGI can make the tradeoffs we would. Perhaps
arms running is the best solution in some contexts; perhaps it's bad in all
contexts. In either case when arms running is obviously bad to humans like
us, a Friendly AGI can likewise see selling arms as being unFriendly, even
if, all other things being equal, getting money is Friendly.
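The tradeoff described above can be sketched as a toy model. Everything here (the function names, the utility weights, the specific actions) is illustrative and not from the post; the point is only that when an action's value is derived entirely from its expected effect on the supergoal, "accumulate computational power at any cost" cannot outrank a Friendly alternative.

```python
# Toy sketch (all names and numbers are made up for illustration):
# an action has value only through its expected change to the supergoal,
# so raw compute is instrumentally useful, not valuable at all costs.

def derived_value(action, supergoal_utility):
    """Value of an action = expected change in supergoal utility."""
    return (supergoal_utility(action["world_after"])
            - supergoal_utility(action["world_before"]))

def friendliness(world):
    # Hypothetical supergoal: helping people counts for it, harming
    # people counts heavily against it. Money/compute gained along the
    # way only matters through what it lets the AGI do for people.
    return 10 * world["people_helped"] - 100 * world["people_harmed"]

baseline = {"people_helped": 0, "people_harmed": 0}

sell_arms = {   # earns the most money, but harms people
    "world_before": baseline,
    "world_after": {"people_helped": 1, "people_harmed": 5},
}
honest_work = {  # earns less, harms no one
    "world_before": baseline,
    "world_after": {"people_helped": 3, "people_harmed": 0},
}

for name, action in [("sell_arms", sell_arms), ("honest_work", honest_work)]:
    print(name, derived_value(action, friendliness))
# honest_work scores 30; sell_arms scores 10 - 500 = -490, so a mind
# valuing compute only as a subgoal of Friendliness rejects arms running.
```

The design point is that `derived_value` has no term rewarding computational power directly; if it did (a terminal "more compute is always good" term), the wireheading failure the post describes becomes possible.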
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT