Re: OpenCog Concerns

From: Jeff Herrlich (jeff_herrlich@yahoo.com)
Date: Mon Mar 03 2008 - 23:37:38 MST


"I don't buy it.

Friendliness has nothing to do with keeping AI out of the hands of "bad guys".
Nobody, good or bad, has yet solved the friendliness problem."

Right, I meant "good guys" as a very general term for programmers who both understand the safety issues and are committed to building a safe, universally beneficial AGI.

There is a danger from programmers/teams who aren't even aware of the safety issues, and another (possibly smaller) danger from programmers/teams who understand the safety issues but might seek to use the AGI for selfish benefit instead of universal benefit (e.g., rogue governments). That scenario is easy to dismiss as science fiction, but it is not an impossibility.

I think that as proto-AGIs develop we will gain a better practical understanding of AGI safety.

Matt Mahoney <matmahoney@yahoo.com> wrote:
--- Jeff Herrlich wrote:

> I also agree that there are significant risks with the open-source approach.
>
> I think that some of those risks can be partially reduced by having a
> well-resourced, Safe-AI team building a closed-source AGI alongside
> improvements to the OpenCog model (e.g., Novamente). IOW, keep the good guys
> "on top of the code".

I don't buy it.

Friendliness has nothing to do with keeping AI out of the hands of "bad guys".
Nobody, good or bad, has yet solved the friendliness problem.

Instances of OpenCog and Novamente would likely be peers in a distributed
query/message posting service like the one I proposed, or some other system
that manages the communication between billions of humans. Their success will
depend on how well and how fast they judge the quality and relevance of
information in a hostile, competitive environment.
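To make "judging the quality and relevance of information" concrete: one simple
reading is a peer that ranks incoming messages against a query. The minimal
Python sketch below assumes a toy bag-of-words cosine score; the scoring scheme
is an illustrative assumption, not part of the proposal.

import math
from collections import Counter

def bag_of_words(text):
    # Toy tokenization: lowercase, split on whitespace, count words.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_messages(query, messages):
    # Return messages sorted by relevance to the query, best first.
    q = bag_of_words(query)
    return sorted(messages,
                  key=lambda m: cosine(q, bag_of_words(m)),
                  reverse=True)

# Example: a peer answering a query from its stored messages.
stored = ["how to train a language model",
          "recipe for banana bread",
          "distributed training of large models"]
print(rank_messages("model training", stored))

A real peer would of course need far more than this (spam resistance, trust
weighting, adversarial robustness), which is exactly where the hostile,
competitive environment bites.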

As long as humans are the primary source of knowledge, the network will be
friendly because good service is a subgoal of the evolutionarily stable goal
of acquiring computing resources. I estimate a network will be deployed over
the next decade and it will remain friendly for about 30 more years. As the
machines do more of our thinking, humans will become less relevant and the
interchange between peers will evolve from natural language to something
incomprehensible and beyond our cognitive abilities to learn. Shortly
afterwards there will be a singularity.

You cannot ignore that there is a US $66 trillion per year incentive (the
value of all human labor worldwide) to develop distributed AI. I have seen
many proposals to build a prototype friendly AI on a desert island, or otherwise
isolated or tightly controlled by its developers. I hope you can see how
impractical these approaches are. You can't compete with the computing power,
knowledge base, and user base already available on the internet.

-- Matt Mahoney, matmahoney@yahoo.com

       


