From: Matt Mahoney (firstname.lastname@example.org)
Date: Sat Mar 14 2009 - 16:04:27 MDT
--- On Fri, 3/13/09, Vladimir Nesov <email@example.com> wrote:
> On Sat, Mar 14, 2009 at 5:03 AM, Roko Mijic
> <firstname.lastname@example.org> wrote:
> > Well, I could respond "unless you can show me a model of the internet acting
> > as an agent rather than a large repository of data, I think the focus should
> > be self-improving intelligence".
AGI = lots of dumb specialists + an infrastructure for getting messages to the right experts + market incentives to build it. What the internet lacks is a distributed index, which would be 1000 times more powerful than Google, interactive (updated instantly and able to initiate conversations), and not owned by anyone.
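The "dumb specialists plus message routing" picture can be sketched as a toy program. This is only an illustration of the architecture described above, not anything from the original post; all names here (SpecialistNetwork, register, route) are hypothetical:

```python
# Toy sketch of "lots of dumb specialists + an infrastructure for
# getting messages to the right experts". The index maps topics to
# narrow handlers; intelligence lives in the routing, not any one expert.
class SpecialistNetwork:
    def __init__(self):
        self.experts = {}  # topic -> handler function

    def register(self, topic, handler):
        """An expert advertises the one topic it can handle."""
        self.experts[topic] = handler

    def route(self, topic, message):
        """The 'index': forward a message to whichever specialist
        claims the topic, or drop it if nobody does."""
        handler = self.experts.get(topic)
        return handler(message) if handler else None

net = SpecialistNetwork()
net.register("arithmetic", lambda m: eval(m, {"__builtins__": {}}))
net.register("echo", lambda m: m.upper())
print(net.route("arithmetic", "2 + 3"))  # 5
print(net.route("echo", "hello"))        # HELLO
```

In this sketch each expert is trivial on its own; a distributed version of the index (many routing nodes, no single owner) is what the post argues the internet currently lacks.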
> > But, I like to have an open mind. You are not the only one who has negated
> > the geographical locality assumption. Maybe you are correct.
My objection is to the various small scale AGI projects by people who think that if they are smart enough to build a machine smarter than themselves, then that machine could do likewise and take over the world. It sounds simple, but unless you are smart enough to build your computer out of dirt, and to teach yourself computer science without parents, school, or language, then that is not what you are doing. It is the collective intelligence of humanity that is building the machine (with you playing a tiny role), and that machine is not smarter than humanity even if it is a million times smarter than a human.
I don't object to the idea of recursive self improvement (RSI). If you measure intelligence by the ability to do work or satisfy goals, or define it as speed + memory + I/O bandwidth + knowledge, then it is obvious that humanity is collectively improving in all of these respects. We are doing it by growing in population, by becoming better organized through the development of language, writing, and communication technology, by accumulating knowledge, by supplementing the network of human brains with computers, and to a very small extent by evolution.
> > If you are correct, what should we do? Probably not much in particular;
> > simply look forward to a world that becomes more and more intelligent over
> > time. Perhaps one would recommend strengthening the democratic process and
> > the rigor of public policy debate.
We should start by understanding the process better. Humans are unable to recognize intelligence higher than their own. This is not just an ego problem; I believe it is fundamental at all intelligence levels. A group of people will make better decisions than its members only if some members disagree with the majority.
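The claim that a group beats its members only when some members disagree echoes the Condorcet jury theorem: a majority of voters whose errors are independent outperforms any single voter, while a group that always votes identically is exactly as good as one member. A minimal sketch of the arithmetic, assuming each voter is independently correct with probability 0.6 (an illustrative number, not from the original post):

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n independent voters,
    each correct with probability p, reaches the right answer (n odd)."""
    need = n // 2 + 1  # votes required for a strict majority
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(need, n + 1))

independent = majority_correct(101, 0.6)  # errors uncorrelated: disagreement
correlated = 0.6                          # everyone copies one member
print(f"independent group: {independent:.3f}, correlated group: {correlated:.3f}")
```

With 101 independent voters at p = 0.6, the majority is right well over 90% of the time; if every member just echoes the majority view, the group is no better than a single 0.6-accurate member.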
Yet RSI happens anyway. I would be hard pressed to say that the goal of humanity is to make itself smarter. Rather, it is the goal of individual humans to compete for resources. RSI happens when individuals cooperate through language and trade.
> The Friendliness issue doesn't go away.
Correct, but it puts the issue in a new light. I perceive two main threats:
1. Unfriendly humanity, where computers acquire the majority of resources in competition with humans. This could happen if we subvert our own goals, e.g. through wireheading, drugs, or uploading to simulated worlds.
2. Spontaneous RSI in competing groups through a rapid evolutionary process, analogous to the evolution of antibiotic-resistant bacteria. An internet with vast computing power provides an infrastructure for competing processes such as intelligent viruses and worms, or self-replicating nanobots. Non-DNA-based life has the potential to evolve at arbitrarily high rates.
In other words, racing to build the first AGI and guaranteeing its friendliness (as if we knew how) is attacking the wrong problem.
-- Matt Mahoney, email@example.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT