RE: economic effects of AI (was RE: About that E-mail:...)

From: Ben Goertzel (ben@intelligenesis.net)
Date: Sun Oct 01 2000 - 06:50:38 MDT


> In trying to think through the transition to this stage of "low-cost"
> human-to-human service, however, I keep getting stuck at the following
> points:
>
> 1) How will the wealth generated by AI be distributed? We don't want to
> fall into a system with the negative incentives of socialism. But we
> would endanger social stability if, say, 1% of the population controlled
> 99% of the wealth because they own the AIs. (An SI would probably not
> submit to being owned!)

I hate to use M$ as a positive example, but here goes....

M$ makes money from the OS. But many, many people make money from
application development on top of their OS.

Similarly, some people will make money from the "core" aspects of AI
engineering, and others will make money from customizing particular
specialized aspects of Internet AI systems for specific purposes....

> 2) How many "lotus eaters" can a society tolerate? Suppose there were
> no need to work for a basic living (welfare for all and jobs for the
> few whose services are in demand). How many heroin addicts would we
> want to support?

The question probably isn't "how many?" but rather "what percentage of
their income are people willing to give up to others, via taxation?"

Empirical data indicates the number is around 40%. Few nations have more
than 50% taxation, because psychologically people don't want to give up
more than half of their income.

If only 10% of people do useful work, but technology allows the other 90%
to be supported via a tax rate under 40%, the situation will probably be
sustainable -- in my view...
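
To make the arithmetic concrete, here is a minimal back-of-the-envelope
sketch; the per-worker output and per-person support cost are purely
hypothetical numbers, chosen only to illustrate the calculation:

    # Can a small working minority support everyone else at a tax rate
    # people will tolerate? All figures below are hypothetical.

    WORKING_FRACTION = 0.10      # fraction of people doing useful work
    OUTPUT_PER_WORKER = 500_000  # annual output per worker (arbitrary units)
    SUPPORT_PER_PERSON = 15_000  # annual cost to support one non-worker
    TAX_CEILING = 0.40           # the psychological ceiling discussed above

    def required_tax_rate(working, output, support):
        """Tax rate on workers' output needed to fund all non-workers."""
        per_capita_output = working * output
        per_capita_support = (1.0 - working) * support
        return per_capita_support / per_capita_output

    rate = required_tax_rate(WORKING_FRACTION, OUTPUT_PER_WORKER,
                             SUPPORT_PER_PERSON)
    print(f"required tax rate: {rate:.1%}")  # 27.0% with these numbers
    print("sustainable" if rate < TAX_CEILING else "exceeds the ceiling")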

> 3) When the possibility for uploading finally arrives in reality, will
> there be a first-mover advantage? Can the first homesteaders of
> cyber-mindspace gain permanent advantages over those who come later?

I tend to doubt it. The first people to get on the Net didn't gain any
lasting advantages.... This kind of complex system is too dynamic... it
has multiple phases of development, and each phase has new properties
that allow new players to enter and potentially dominate. Just like an
economy: IBM had the first-mover advantage in computers, until the
complex dynamics changed the game and others took over...

Nevertheless, I'll be trying hard to gain any "first-mover advantages" for
myself ;>

ben

> (If you could control the cyber equivalent of the fundamental constants
> of physics, what would you do?)

> Regards,
> Michael LaTorra
> mike99@lascruces.com
>
>
>
> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com]On Behalf
> Of Ben Goertzel
> Sent: Saturday, September 30, 2000 8:12 PM
> To: sl4@sysopmind.com
> Subject: RE: economic effects of AI (was RE: About that E-mail:...)
>
>
>
> It's not a ridiculous argument, but I still think it's a wrong one.
>
> People need more than just material goods; people need emotional goodies
> from other people, which come in a million different forms.
>
> What this means is that as long as there are humans with human bodies
> similar to the ones we have now, there will be plenty of service jobs,
> because humans want to be surrounded by humans doing things with and
> for them.
>
> Also, I predict that when real AI comes about, there will be a long
> period when it is complementary to, rather than unmitigatedly superior
> to, human intelligence. Each kind of mind will have its niche.
> Ultimately AI will surpass us in all ways, but by then the human body
> may also be obsolete due to other technology advances...
>
> ben
>
> > -----Original Message-----
> > From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com]On Behalf
> > Of Michael LaTorra
> > Sent: Saturday, September 30, 2000 10:02 PM
> > To: sl4@sysopmind.com
> > Subject: economic effects of AI (was RE: About that E-mail:...)
> >
> >
> > I do agree that commercial AI leading up to SI (at whatever rate of
> > progress) would almost certainly be perceived as a great boon, because
> > it will make many people rich and provide tangible benefits to others
> > in the forms of new or cheaper goods and services.
> >
> > But this initial "era of good feeling" could change quickly as AI
> > advances begin to substitute for more and more "human capital" (i.e.,
> > people's jobs). I am making this argument not because it feels right
> > to me intuitively, but because a very intelligent transhumanist
> > economist has made it. Here's the link to, and the abstract of, Robin
> > Hanson's paper:
> >
> > http://hanson.gmu.edu/workingpapers.html
> > [NOTE: Go to the page and scroll down to the title below, then click
> > it to open the actual PDF file.]
> >
> > Economic Growth Given Machine Intelligence, Aug. '98
> >
> > A simple exogenous growth model gives conservative estimates of the
> > economic implications of machine intelligence. Machines complement
> > human labor when they become more productive at the jobs they
> > perform, but machines also substitute for human labor by taking over
> > human jobs. At first, expensive hardware and software does only the
> > few jobs where computers have the strongest advantage over humans.
> > Eventually, computers do most jobs. At first, complementary effects
> > dominate, and human wages rise with computer productivity. But
> > eventually substitution can dominate, making wages fall as fast as
> > computer prices now do. An intelligence population explosion makes
> > per-intelligence consumption fall this fast, while economic growth
> > rates rise by an order of magnitude or more. These results are robust
> > to automating incrementally, and to distinguishing hardware, software,
> > and human capital from other forms of capital.
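
A toy simulation of the complement-then-substitute dynamic the abstract
describes might look like the sketch below; the production function, the
falling price path, and every parameter are invented for illustration,
not taken from Hanson's model:

    # Toy complement-then-substitute dynamic (hypothetical throughout).
    # Output is Cobb-Douglas in machine capital K and labor L; firms buy
    # K until its marginal product equals the machine price.

    ALPHA = 0.5          # machine-capital share (hypothetical)
    price = 5.0          # initial machine price (arbitrary units)
    PRICE_DECAY = 0.85   # machine prices fall 15% per year (hypothetical)

    print(f"{'year':>4} {'machine price':>14} {'human wage':>11}")
    for year in range(40):
        # While machines only complement labor, the wage is the marginal
        # product of labor, which rises as machines get cheaper:
        complement_wage = (1 - ALPHA) * (ALPHA / price) ** (ALPHA / (1 - ALPHA))
        # Once machines can do human jobs for less than that wage, the
        # wage is pinned to the machine price and falls along with it:
        wage = min(complement_wage, price)
        print(f"{year:>4} {price:>14.3f} {wage:>11.3f}")
        price *= PRICE_DECAY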
> >
> > Regards,
> > Michael LaTorra
> > mike99@lascruces.com
> >
> >
> > -----Original Message-----
> > From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com]On Behalf
> > Of Ben Goertzel
> > Sent: Saturday, September 30, 2000 7:45 PM
> > To: sl4@sysopmind.com
> > Subject: RE: About that E-mail:...
> >
> >
> >
> > Here's another point:
> >
> > If the first real AI is a commercial enterprise, it'll be making
> > people money.
> >
> > Everyone will own stock in real AI... it'll be a huge popular
> > sensation... the financial aspects may drown out any troublesome
> > philosophical aspects in the public mind...
> >
> > If they're making money off it in the short run, not many people will
> > really be thinking about the long run -- this is typical homo sapiens
> > shortsightedness, which will work in favor of cosmic evolution in
> > this case.
> >
> > -- ben goertzel
> >
> > -----Original Message-----
> > From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com]On Behalf
> > Of Eliezer S. Yudkowsky
> > Sent: Saturday, September 30, 2000 9:23 PM
> > To: sl4@sysopmind.com
> > Subject: Re: About that E-mail:...
> >
> >
> > Josh Yotty wrote:
> > >
> > > I'm willing to bet the people working toward superhuman
> > > intelligence will be hunted down. Of course, the people hunting us
> > > down will be irrational, ignorant, narrowminded and stupid.
> >
> > Be careful what you fear. Sufficient amounts of hatred tend to turn
> > into self-fulfilling prophecies... and if somebody really did try and
> > hunt me down, I sure wouldn't want to underestimate them.
> >
> > You'd be amazed at how often witch-hunts don't happen in First World
> > countries. I can't think of anything I ought to be doing in advance
> > to prepare for the possibility of violent protesters, so I don't
> > intend to worry excessively over the possibility until it starts
> > actually happening. There are essentially two strategies to deal with
> > anti-technology crusades; you can try to run quietly and
> > unobtrusively, or you can try for a pro-technology crusade. I've
> > observed that ordinary people tend to grasp the Singularity on the
> > first try; it's the people who think they're intellectuals that you
> > have to watch out for -- so the second possibility is actually
> > plausible. I don't know if running quietly is plausible -- it depends
> > on how long it takes to get to a Singularity. It's starting to look
> > as if, if we don't bring the issue into the public eye, Bill Joy
> > will.
> >
> > Presently, I think it's not too much to hope that the future will not
> > contain anti-AI terrorist organizations. There are anti-GM groups and
> > anti-abortion groups, but it's harder to get public sympathy for a
> > violent crusade against something that's only a possibility -- I hope.
> >
> > If we do bring the issue into the public eye, turning it into an
> > elitist issue isn't really going to help.
> >
> > -- -- -- -- --
> > Eliezer S. Yudkowsky http://intelligence.org/
> > Research Fellow, Singularity Institute for Artificial Intelligence



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT