Re: [sl4] Friendly AI and Enterprise Resource Management

From: Matt Mahoney
Date: Thu Oct 07 2010 - 16:19:15 MDT

From: Mindaugas Indriunas

>It might be that one of the best ways to bring about the friendly AI is by
>trying to be very rational about one's own actions, defining one's own goal of
>life, and doing it in such a way that the resulting goal would be the objective
>good; and consequently designing a resource management system to automate and
>optimize one's own resource management decision making to achieve personal
>prosperity, which is identical to the prosperity of everyone.

Depends on what you mean by "friendly". Eliezer's old definition
( ) works as long as there are well defined
boundaries between human and non-human, which will not be the case with AI. I
prefer to define "friendly" as meeting the goals of humanity, with conflicts
resolved as if by a secrecy-free ideal market. But my definition is open to the
same criticism. We cannot agree even now on whether embryos, animals, or
foreign-born humans count as "humans" with the same rights you have. It will be
even harder when we have human-machine hybrids and uploads that can make
millions of copies of themselves. That matters because markets are not fair.
Without rules that take from the rich and give to the poor, the smart and the
rich would get richer, leaving everyone else to die. But that may not matter if
our silicon descendants aren't programmed with the same sense of fairness that
we are.

I think our inability to define "friendly" concisely is because it is
algorithmically complex, on the same order as the sum of human knowledge.
That's about 10^17 bits stored in human brains, assuming 10^9 bits per brain
and that 99% of what you know is also known to at least one other person. Our
legal system at every level (from international treaties to local regulations)
attempts to define what we mean by good behavior, but even that is obviously
oversimplified and insufficient.
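The 10^17 figure follows from simple arithmetic. A minimal sketch, assuming a
2010 world population of about 7 billion (my number; the text gives only the
per-brain and redundancy figures):

```python
# Order-of-magnitude estimate of humanity's collective knowledge.
# Assumptions: ~7e9 people, 1e9 bits of knowledge per brain,
# and 99% redundancy (only 1% of each brain's knowledge is unique).
population = 7e9
bits_per_brain = 1e9
unique_fraction = 0.01  # 99% of what you know, someone else also knows

total_unique_bits = population * bits_per_brain * unique_fraction
print(f"{total_unique_bits:.0e} bits")  # ~7e16, i.e. order 10^17
```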

In order for computers to do what you want, they have to know what you know,
because effective human communication depends on both parties already knowing
most of what the other knows. Humans can only communicate at a rate of a few
bits per second. Getting all that knowledge into an AI will take either decades
of observation of the global population or some advanced technology, such as
brain scanning, to collect the data faster. Either way, AI should be friendly
as long as we develop it with continuous feedback and its intelligence does not
exceed that of humanity. (It has already surpassed that of individual humans.)
After it surpasses humanity, we will no longer control it, although it could
make us
think we do. We may or may not still be relevant, depending on whether you
think "we" includes the machines that augment our intelligence by outsourcing
our thinking.
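The "decades" claim can be checked the same way. A rough sketch, assuming
2 bits/second as a concrete value for "a few bits per second" (my choice)
and the 10^9 bits per brain from above:

```python
# How long would one person take to communicate their knowledge
# at human communication rates? Assumptions: 1e9 bits per brain,
# 2 bits/second output rate. Observing everyone in parallel still
# takes this long, since each brain drains at the same slow rate.
bits_per_brain = 1e9
rate_bps = 2.0  # "a few bits per second"

seconds = bits_per_brain / rate_bps
years = seconds / (3600 * 24 * 365)
print(f"{years:.1f} years")  # roughly 16 years, i.e. decades
```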

Sorry if the problem seems harder than you thought. It's not the first time the
topic has come up.

 -- Matt Mahoney,

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT