Re: Can't afford to rescue cows

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Tue Apr 22 2008 - 20:49:23 MDT


--- Stuart Armstrong <dragondreaming@googlemail.com> wrote:

> > Which is good or bad? In whose opinion?
> >
> > Nobody is going to believe that any ethical system other than their
> > own is the correct one. The definition of "good" and "bad" that gets
> > programmed into AI will depend on which tribe wins the war.
>
> Well, you are part of a small group that has the most chance of being
> the "tribe" that wins the war. What do you want to see? What are your
> opinion of good and bad? Does the fact that you will probably win make
> those opinions less valid?

My utility function, like that of most other humans, can be described by
Maslow's hierarchy of needs. It has been well tested by evolution. But
really, this is the wrong question, as I discuss in
http://www.mattmahoney.net/singularity.html

The reason is that we do not get to design AI. The value of AI is the
value of the human labor it replaces, US $2 to $5 quadrillion over the
next 30 years. We should not expect it to cost orders of magnitude less
than that to build, contrary to what AI researchers have believed for the
last 50 years.
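
To see roughly where a figure of that size comes from, here is a
back-of-envelope calculation. The inputs are my own illustrative
assumptions (world GDP of about US $60 trillion per year circa 2008,
most of it paid out as wages for human labor), not exact figures:

  # Back-of-envelope: cumulative value of world economic output over
  # 30 years under assumed growth rates. The $60 trillion starting
  # point and the growth rates are illustrative assumptions.

  def cumulative_gdp(initial=60e12, years=30, growth=0.05):
      """Sum annual world GDP over `years`, compounding at `growth`."""
      total, gdp = 0.0, initial
      for _ in range(years):
          total += gdp
          gdp *= 1 + growth
      return total

  print(f"0% growth: ${cumulative_gdp(growth=0.00)/1e15:.1f} quadrillion")
  print(f"5% growth: ${cumulative_gdp(growth=0.05)/1e15:.1f} quadrillion")
  # prints about $1.8 and $4.0 quadrillion -- the order of magnitude
  # cited above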

It is not enough just to build a human brain and solve language, vision,
robotics, etc. AI researchers have argued that once you build a brain and
educate it, you can make billions of copies very cheaply. But it does not
work that way. Humans form organizations to solve problems that they
cannot solve individually. Organizations are more efficient because each
member has a specialized task, and each member has to be trained
individually. That training is the expensive part. Software and training
do not obey Moore's law. Until we achieve AI, humans have to do this
training of other humans, correcting the trainees' inevitable mistakes,
if we want workers to do what we want. (After AI, the question will be
whether you trust the machines to do the training.)

Maybe you have noticed that the internet is getting smarter. Maybe not. It
is happening slowly. What is happening is that billions of brains and
billions of computers are getting better at communicating with each other. It
becomes easier to find other people or machines that share your interests or
have related knowledge. It is easier to specialize. The internet is starting
to act like a collective, general intelligence. I have proposed a protocol in
http://www.mattmahoney.net/agi.html to make this happen. Even if it is not
adopted in this form, I believe it will happen in some form because there is a
huge economic demand for it. It could end up being a lot messier, maybe with
thousands of partially incompatible protocols, but ultimately the goal of
improved communication between billions of narrowly focused experts will be
achieved. THAT is AI.
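
To make this concrete, here is a minimal sketch of the kind of message
routing such a protocol implies. It is my own toy illustration, not the
actual proposal at http://www.mattmahoney.net/agi.html: each peer
advertises its expertise, and a message is forwarded to whichever peers
match it best, scored here by naive keyword overlap.

  # Toy sketch: route a natural-language message to the peers whose
  # advertised expertise best matches it. Peer names and the scoring
  # rule are my own assumptions, for illustration only.

  def tokenize(text):
      return set(text.lower().split())

  class Peer:
      def __init__(self, name, expertise):
          self.name = name
          self.expertise = tokenize(expertise)

  def route(message, peers, top_k=2):
      """Return the top_k peers with the largest keyword overlap."""
      words = tokenize(message)
      return sorted(peers, key=lambda p: len(words & p.expertise),
                    reverse=True)[:top_k]

  peers = [Peer("weather-bot", "weather forecast rain temperature"),
           Peer("stock-bot", "stock market price trading shares"),
           Peer("cooking-bot", "recipe cooking ingredients dinner")]

  for p in route("will it rain tomorrow, what is the forecast", peers):
      print("forward to", p.name)   # weather-bot ranks first

A real network would need far better relevance ranking than this, but the
point stands: no single peer needs to be intelligent; the collective
routing between narrow specialists is what adds up to intelligence.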

I believe the friendliness problem needs to be studied in this context (as
a forecast) rather than as an engineering problem. While most of the
computing power on the internet is carbon based, there is an economic
incentive to attach specialists that serve the interests of humans.
Machines compete for human attention in a market where information has
negative value: senders, not recipients, pay to get the marginal message
read, as with advertising and spam. When the balance of power shifts to
silicon, machines will compete more for the attention (and computing
resources) of other machines. What I believe will happen in the
architecture I described is that the language between peers will shift
from natural language to something incomprehensible to us. We will no
longer know what our computers are doing. The singularity will then be
imminent.

Unfortunately this is not a problem I know how to solve. I do not even know
if it is a problem. If humans are not the dominant form of intelligence, then
whose opinion really matters?

-- Matt Mahoney, matmahoney@yahoo.com


