Re: OpenCog Concerns

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Tue Mar 04 2008 - 20:39:52 MST


--- Jeff Herrlich <jeff_herrlich@yahoo.com> wrote:

> "I don't buy it.
>
> Friendliness has nothing to do with keeping AI out of the hands of "bad
> guys".
> Nobody, good or bad, has yet solved the friendliness problem."
>
> Right, I meant "good guys" as a very general term that refers to programmers
> who both understand the safety issues, and who are committed to building a
> safe, universally beneficial AGI.
>
> There is a danger from programmers/teams who aren't even aware of safety
> issues, and another (possibly smaller) danger from programmers/teams who
> understand the safety issues but who might seek to use the AGI for selfish
> benefits instead of universal benefits (e.g., rogue governments). It's
> easy to label as science fiction, but it's also not an impossibility.
>
> I think that as proto-AGIs develop we will gain a better practical
> understanding of AGI safety.

My question is about the safety of distributed AI that emerges from a network
of narrowly specialized experts that talk to each other. You can consider my
proposal at http://www.mattmahoney.net/agi.html or more generally any
environment where peers compete for resources and reputation with no
centralized control.

I think I am at least aware of some of the risks of runaway recursive
self-improvement. I believe that a distributed AI controlled by billions of
humans, and whose primary source of knowledge is also human, will at least
reflect a consensus view of ethics and friendliness. Peers will compete for
reputation and audience in a hostile environment, so we should expect them to
respond to questions with useful and correct answers, including questions
about human goals and about the right thing to do in a wide variety of
circumstances.
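
To make that concrete, here is a toy sketch in Python (not the actual
protocol in my proposal; the Peer class, route(), and the multiplicative
feedback rule are placeholders I made up purely for illustration) of
specialists competing on reputation for the questions they answer:

  import random

  class Peer:
      """A narrowly specialized expert: answers questions only on its topics."""
      def __init__(self, name, topics):
          self.name = name
          self.topics = set(topics)
          self.reputation = 1.0          # grows or decays with feedback

      def answer(self, topic, question):
          if topic in self.topics:
              return f"{self.name}'s answer to {question!r}"
          return None                    # not my specialty; stay silent

  def route(peers, topic, question):
      """No central control: ask the specialists, prefer the most reputable."""
      candidates = [p for p in peers if topic in p.topics]
      if not candidates:
          return None, None
      best = max(candidates, key=lambda p: p.reputation)
      return best, best.answer(topic, question)

  def feedback(peer, useful):
      """Askers reward useful answers; reputation is the scarce resource."""
      peer.reputation *= 1.1 if useful else 0.9

  # Tiny demo: three specialists compete on overlapping topics.
  peers = [Peer("alice", ["ethics"]),
           Peer("bob", ["ethics", "security"]),
           Peer("carol", ["security"])]
  for _ in range(5):
      peer, _ = route(peers, "ethics", "what should I do?")
      feedback(peer, useful=random.random() > 0.3)   # askers judge usefulness
  for p in peers:
      print(p.name, round(p.reputation, 2))

In a real network the feedback would come from billions of human users (and
from other peers), which is why I expect the answers that survive to reflect
a consensus view of what is useful and right.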

Distributed AI has special risks. As computing power gets cheaper and peers
become more intelligent, humans will no longer be the primary source of
knowledge and will become less relevant. The language between peers could
evolve from natural language into something too complex for humans to
understand. Shortly afterwards there would be a singularity.

Another risk of distributed AI is that when intelligence develops to the
point where the system can rewrite its own software, it will also become
possible to develop intelligent worms that discover and exploit new security
holes faster than humans can patch them. Conventional security measures such
as virus scanners, firewalls, and intrusion detection systems would offer no
protection, because the attacks would be unknown to them. It is quite
possible that peers will expend the majority of their CPU cycles fending off
attacks and filtering spam, while at the same time trying to defeat the
defenses of other peers.

Of course there are risks of AI in general that depend on philosophical
questions that can't be answered. Is the AI friendly if we ask to be put in
an eternal state of bliss and it obeys? Is it a good outcome if humans are
extinct but our memories are preserved by a superhuman AI? Such questions are
fun to discuss, but they seem only to waste our time without leading to any
progress.

-- Matt Mahoney, matmahoney@yahoo.com


