Re: Risks of distributed AI (was Re: Investing in FAI research: now vs. later)

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Fri Feb 22 2008 - 08:59:12 MST


--- Daniel Burfoot <daniel.burfoot@gmail.com> wrote:

> On Fri, Feb 22, 2008 at 2:30 AM, Matt Mahoney <matmahoney@yahoo.com> wrote:
>
> > I described one possible design in http://www.mattmahoney.net/agi.html and
> > did my thesis work to show that a very abstract model of this architecture
> > is robust and scalable.
>
>
> This is a nice idea - I hope you pursue it further.
>
>
> > The idea that AI could fall into the "wrong hands" is like the Internet
> > falling into the wrong hands.
> >
>
> Consider the following scenario. A pseudo-AI capable of high-performance
> computer vision, speech recognition, and natural-language comprehension
> is developed and made widely available (perhaps a descendant of PAQ).
>
> Individuals with the semi-intelligent system can do all kinds of neat
> things. They can build robots to fetch them coffee. They can teach their
> cars how to drive. They can program face-recognition security systems that
> protect their houses from burglars.
>
> The government can do all of these little tricks too. However, the
> government also has access to the following physical infrastructure:
> 1) visual surveillance systems in all public places
> 2) apparatus to monitor electronic communications across the globe
> 3) robotic soldiers
>
> It seems to me that:
> P(semi-intelligence) << P(semi-intelligence + physical infrastructure)
>
> where P(..) is the "power function" - the amount of power/capability/utility
> that a technological system makes available to its controller.
>
> In our current system, there is a delicate balance of power between
> individuals and the state. Many people would argue that the balance is
> currently tilted too far towards the state. Regardless, the introduction of
> semi-intelligence would seem to dramatically upset the balance of power,
> even if it is made widely available, because of the way semi-intelligence
> interacts with the government's pre-existing physical infrastructure.
>
> If you basically trust the government, then the above scenario shouldn't
> worry you too much. I do not trust the government. We have no foolproof way
> to guarantee that the reins of government power do not fall into the hands
> of evil men. Tyranny has plagued humanity since the beginning of
> civilization.
>
> Dan
>

If you look at different countries today, you'll notice an inverse
correlation between technological advancement and government corruption.
An infrastructure that allows free communication between people makes it
harder for government officials to keep secrets. I did not design
distributed AI with a political agenda, but it is a system that would be
implemented worldwide and controlled not by any one group but by all of
its users, allowing messages to go anywhere without restriction on
content or on who can send or receive them. The protocol requires that a
message be associated only with the sender's reply address, which can be
temporary and anonymous. Some governments could see this as threatening
and try to restrict it, but I think they would lose the race for
technological advancement, just as they would if they cut off internet
and phone access.
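
To make the protocol constraint concrete, here is a minimal sketch in
Python. The field names, the hash-based throwaway address, and the
flooding-style relay are illustrative assumptions, not a spec:

    # A message carries only a payload and the sender's reply address.
    import os
    import hashlib
    from dataclasses import dataclass

    @dataclass
    class Message:
        reply_to: str   # temporary, anonymous reply address
        payload: str    # arbitrary content; the protocol imposes no limits

    def temp_address() -> str:
        # A throwaway address derived from random bytes. It names a
        # mailbox, not a person, and can be discarded after one use.
        return hashlib.sha256(os.urandom(32)).hexdigest()[:16]

    def relay(msg: Message, peers: list) -> None:
        # No central control: each user's node forwards messages to its
        # peers, so delivery is provided by the users themselves.
        for peer in peers:
            peer.deliver(msg)

    msg = Message(reply_to=temp_address(), payload="any content, to anyone")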

My bigger concern is that people are increasingly depending on computers
to deal with the complexity that computers help create. When machines do
everything for us, including thinking for us, our role in shaping the
future is diminished. My design does not solve this problem.

-- Matt Mahoney, matmahoney@yahoo.com


