Re: What are "AGI-first'ers" expecting AGI will teach us about FAI?

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Sat Apr 12 2008 - 19:58:45 MDT


--- Rolf Nelson <rolf.h.d.nelson@gmail.com> wrote:

> On Fri, Feb 29, 2008 at 6:02 PM, Ben Goertzel <ben@goertzel.org> wrote:
> > The fact that AGI ethics is incredibly badly understood right now, and
> > the only clear route to understanding it better is to make more empirical
> > progress toward AGI. I find it unlikely that dramatic advances in AGI
> > ethical theory are going to be made in a vacuum, separate from
> > coupled advances in AGI practice. I know some others disagree on
> > this.
>
> For any of the many people who agree with Ben's sentiment:
>
> Large numbers of people have made various AI advances in the past. In
> none of these cases, to my knowledge, have FAI people said, "a-ha,
> that's one of the pieces of data I was waiting for, this advances FAI
> theory." Why would we expect this to change in the future? At the very
> least, doesn't this show that even if FAI advances require AGI
> advances, the "bottleneck" is that there are too few people working on
> deriving FAI from existing AGI, rather than too few people working on
> existing AGI?
>
> Are there specific facts about AGI that you're waiting to find out,
> such that if the result of a pending experiment is A, then successful
> FAI theory lies in one direction, but if the result is B, then
> successful FAI theory lies in a different direction? If so, what are
> such facts?

I agree with Ben. Discussions of friendliness seem to break down into widely
divergent views in the absence of even the most fundamental agreement on what
AGI will look like.

I believe it is easier to analyze threats in the context of specific
proposals. My proposal for competitive message distribution at
http://www.mattmahoney.net/agi.html is quite different from the usual attempts
to build something resembling a human mind. I believe that AGI will emerge in
the form of a large collection of narrow domain experts and an infrastructure
that routes messages to the right experts. I estimate that it will cost US
$1 quadrillion and take 30 years to reach parity with carbon-based
intelligence. I argue that it will be built because AGI is worth that much
and because the system provides economic incentives for people to contribute.
Information has negative value on average (most of it is noise to any given
recipient), so in a market where peers compete for reputation and computing
resources, the winning peers will be those most helpful to humans: the ones
that identify and route the most useful information and filter out the rest.
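
To make the routing mechanism concrete, here is a minimal sketch in Python
(my own illustration, not the protocol specified at the URL above; the names
and the keyword-matching scheme are hypothetical). Each peer claims a narrow
domain, the router forwards a message to the peer with the best combination
of relevance and reputation, and receiver feedback moves reputation up or
down:

  # Toy competitive message routing (hypothetical; not the actual protocol).
  class Expert:
      def __init__(self, name, keywords):
          self.name = name
          self.keywords = set(keywords)  # the narrow domain this peer claims
          self.reputation = 1.0          # rises or decays with feedback

      def relevance(self, message):
          """Fraction of the message's words that fall in this domain."""
          words = set(message.lower().split())
          return len(words & self.keywords) / max(len(words), 1)

  def route(message, experts):
      """Forward to the expert maximizing relevance times reputation."""
      return max(experts, key=lambda e: e.relevance(message) * e.reputation)

  def feedback(expert, useful):
      """Receivers reward useful routing and punish noise."""
      expert.reputation *= 1.1 if useful else 0.9

  experts = [Expert("weather", ["rain", "forecast", "storm", "snow"]),
             Expert("medicine", ["dose", "fever", "symptom", "drug"])]
  chosen = route("will the forecast call for rain or snow", experts)
  print(chosen.name)             # -> weather
  feedback(chosen, useful=True)  # its reputation grows; noisy peers decay

A real network would also need distributed reputation, payment in computing
resources, and defenses against the attacks below, but the selective
pressure is the same: route useful messages and filter noise, or lose.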

In my proposal I have identified several types of attacks, such as spam and
forged messages. I am sure I have overlooked something. Two that I did not
mention:

1. Intelligent worms. Security tools are double-edged: there is practically
no tool used to defend information systems that isn't also useful to an
attacker (for example, systems that probe for vulnerabilities, test for weak
passwords, check files against a suite of virus detectors, etc.). A big
source of vulnerability is software bugs. We would like AI to automatically
analyze software and test it for vulnerabilities, a job that currently only
humans can do. If this technology were available, a worm could use it to
discover and immediately exploit thousands of vulnerabilities and quickly
take over nearly every computer on the internet (see the toy sketch after
this list).

2. Getting what you want. Distributed AGI grants immediate wishes, not our
extrapolated volition, because that is what the economic system rewards.
Humans evolved in a world where we can't have everything we want. What
happens to the human race when we can all have eternal bliss (wireheading) or
a magic genie (in a simulated world)?
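
Returning to item 1, here is a toy illustration of the dual-use point (my
own sketch; parse() and its planted bug are hypothetical). The same loop a
developer runs to harden a parser is what a worm would automate at scale:

  # Toy fuzzer against a deliberately buggy target (illustration only).
  import random, string

  def parse(data):
      """Hypothetical target: trusts a declared length field without
      checking it against the actual payload (a classic bug class)."""
      if not data:
          return ""
      declared = ord(data[0]) % 16  # first byte claims the payload length
      payload = data[1:]
      return payload[declared - 1] if declared else ""  # crashes if it lies

  def fuzz(trials=10000):
      """Throw random inputs at parse() and report the first crash."""
      for _ in range(trials):
          data = "".join(random.choice(string.printable)
                         for _ in range(random.randint(0, 8)))
          try:
              parse(data)
          except IndexError:
              print("crash on input %r" % data)  # a defender patches this;
              return data                        # a worm exploits it
      return None

  fuzz()

Random fuzzing like this is weak; the worry in item 1 is an AI that reads
the code and goes straight to the bug, making the find-and-exploit loop
fast enough to outrun any human response.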

I believe my proposal is friendly only in the near term. I don't have
solutions for the long-term problems.

> At what point will you know that AGI has advanced enough that FAI can
> proceed?

I am pessimistic. The early designers of the internet (TCP/IP, HTTP, HTML)
could not have anticipated today's security problems, and could not have run
a simulation to test for them. I don't believe we can anticipate all of the
problems that will arise from AGI until we actually build it, and by then it
will be too late, because we won't know what our computers are doing anymore.

-- Matt Mahoney, matmahoney@yahoo.com


