From: Matt Mahoney (matmahoney@yahoo.com)
Date: Sun Oct 26 2008 - 10:22:15 MDT
--- On Fri, 10/24/08, Stuart Armstrong <dragondreaming@googlemail.com> wrote:
> My objection has always been to the "extrapolated" aspects of it. It
> seems entirely credible that a CEV constructed from me would conclude
> that humanity should be killed off for some reason. I wouldn't follow
> it down this path, and I don't see why I should.
Indeed, this will kill any chance that we will actually implement CEV. We are not going to build a machine that will be our master. Consider these alternatives:
1. AGI will force you to eat your vegetables and not shoot heroin because it knows that is what your future self will want.
2. AGI will give you information that eating your vegetables will make you healthier than shooting heroin, but leave the choice up to you.
3. If the AGI decides you are not intelligent enough to understand the message in (2) (i.e., you ignored its advice), then it will augment your intelligence by programming the correct beliefs into your mind. Afterward, you will want to make the "correct" choice.
We will not build (1) because AGI is expensive, you are paying for it, and it is not what you want. We could accept (2), and maybe (3). However, (3) raises a technical issue that goes beyond the ability to program human brains:
AGI is a global brain. The intelligence threshold for AGI is not one brain but 10^10 brains. Even excluding computers, entities with superhuman intelligence already exist: an organization of people has more computing power, more I/O, more memory, more knowledge, and a faster learning rate than any of its members. It is well established that groups of people collectively make more accurate predictions than any of their members, e.g. through voting or prediction markets. (The same principle is applied in machine learning as ensemble methods.) However, collective decision making would not work if all of the members agreed with each other. If they did, then any member could predict what the group would predict, and the group would add no information. It is a requirement that members believe they are more intelligent than the group, i.e. that they are right and the group is wrong.
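This effect is easy to check numerically. The Python sketch below is my own illustration (the function name and parameters are hypothetical, not from the original post): 101 independent voters who are each right 60% of the time produce a majority that is right about 98% of the time, while fully conforming voters produce a group no better than any one member.

import random

def simulate(n_voters, p_correct, conformity, n_trials=20_000):
    """Fraction of trials in which a majority vote is correct.

    Each voter is independently correct with probability p_correct,
    except that with probability `conformity` every voter copies a
    single shared opinion (modeling a group programmed to agree).
    """
    majority_correct = 0
    for _ in range(n_trials):
        if random.random() < conformity:
            # Conformity: the group is only as good as one member.
            votes = n_voters if random.random() < p_correct else 0
        else:
            # Independence: individual errors tend to cancel out.
            votes = sum(random.random() < p_correct
                        for _ in range(n_voters))
        if votes > n_voters / 2:
            majority_correct += 1
    return majority_correct / n_trials

# Independent voters: the group far outperforms any member.
print(simulate(101, 0.6, conformity=0.0))   # ~0.98
# Conforming voters: the group is no better than one member.
print(simulate(101, 0.6, conformity=1.0))   # ~0.60

The second run is the point: program the members to agree and the collective intelligence disappears.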
This is a problem for (3). AGI is a collective intelligence made up of humans, machines, and a communication infrastructure for getting messages to the right specialists. However, if the AGI could program its members to conform, then it would no longer be more intelligent than its members.
> Most CEV advocates claim that simple caveats like "don't kill off
> all humans" should be added. Eliezer mentioned a "final judge" which
> would decide whether to implement the CEV or not, a conceptually
> similar idea (though in practice much better).
I agree a final judge will not work. A judge would have to have greater algorithmic complexity than CEV in order to make an intelligent decision about it. Who would that be? Here are some estimates of the sizes involved (the arithmetic is sketched after the list):
1. http://intelligence.org/upload/CEV.html is 10^5 bits (I measured it :-) ). Of course this is not an implementation.
2. The world's legal system is around 10^14 bits, assuming that the business of writing and reviewing legislation at all levels of government is about 10^-4 of our economic output, and taking that fraction of the ~10^18 bits of collective human knowledge estimated in (3).
3. A model of the world's population of human minds is 10^17 to 10^18 bits, assuming 10^10 people, 10^9 bits of long-term memory each (as estimated by Landauer), and 90% to 99% overlap of knowledge (estimated from the cost of replacing an employee). Note that this number is growing due to (a) population growth, (b) decreasing overlap through specialization as the economy becomes more efficient, and (c) transfer of knowledge to machines.
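The arithmetic behind estimates (2) and (3) is easy to reproduce. The Python sketch below is my reconstruction, taking the figures above at face value; in particular, for (2) it applies the 10^-4 economic fraction to the collective-knowledge total from (3) as a proxy.

# Back-of-envelope arithmetic for estimates (2) and (3) above.
# These inputs are the figures quoted in the text, not measurements.

population      = 1e10         # ~10^10 people
bits_per_person = 1e9          # Landauer's long-term memory estimate
overlap_high, overlap_low = 0.99, 0.90   # 99% to 90% shared knowledge

# Estimate (3): non-redundant knowledge of the human population.
knowledge_min = population * bits_per_person * (1 - overlap_high)  # 1e17 bits
knowledge_max = population * bits_per_person * (1 - overlap_low)   # 1e18 bits
print(f"collective knowledge: {knowledge_min:.0e} to {knowledge_max:.0e} bits")

# Estimate (2): the legal system as ~10^-4 of that knowledge, using
# its share of economic output as a proxy for its share of knowledge.
legal_fraction = 1e-4
print(f"legal system: ~{legal_fraction * knowledge_max:.0e} bits")  # ~1e14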
Nor will simplistic rules like "don't kill off all humans" work. CEV does not define "kill" or "human". Those terms are only defined in a world without AI. Is an upload a human? Which variations? Is it killing if you leave a copy?
-- Matt Mahoney, matmahoney@yahoo.com