Re: Maximizing vs proving friendliness

From: Matt Mahoney
Date: Wed Apr 30 2008 - 07:57:12 MDT

--- Stefan Pernar wrote:

> Firstly reducing human complexity to its genetic complexity would be
> ignoring cognitive complexity which can be argued is at least 3-4
> orders of magnitude greater (see

I agree. Landauer estimates 10^9 bits of long term memory. This
counts only high level cognitive memory, i.e. ability to recall words,
images, music clips, etc. We don't know how to measure the information
content of low level perceptual or motor skills.
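As a rough sanity check on these magnitudes, here is a toy back-of-envelope calculation. Only the 10^9-bit Landauer figure comes from this email; the genome length and the "3-4 orders of magnitude" midpoint are illustrative assumptions:

```python
# Back-of-envelope magnitudes (illustrative assumptions, clearly marked).
landauer_ltm_bits = 1e9          # Landauer's long-term-memory estimate (from the email)
genome_bp = 3.2e9                # approximate human genome length in base pairs (assumption)
raw_genome_bits = 2 * genome_bp  # 2 bits per base pair, uncompressed

# If cognitive complexity is "3-4 orders of magnitude greater" than
# genetic complexity, the implied (compressed) genetic figure would be:
implied_genetic_bits = landauer_ltm_bits / 10**3.5  # midpoint of 3-4 orders
print(f"raw genome:           {raw_genome_bits:.1e} bits")
print(f"implied genetic info: {implied_genetic_bits:.1e} bits")
```

The gap between the raw genome size and the implied genetic figure is the point: most of the raw sequence is presumed redundant or non-functional, so which number you count as "genetic complexity" drives the ratio.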

> Secondly the human genome/memome does not represent a human's utility
> function any more than the rendered Mandelbrot set represents its
> formula.
> What it does represent is one of trillions of evolution's best
> current guesses how to satisfy evolution's utility function.

The genome (not memome) complexity is an upper bound on the complexity
of the part of the brain's knowledge that cannot be learned. This
would include our utility function and our inductive biases. Memes
contribute to our (much larger) programmable knowledge base.


I basically agree with your evolutionary approach. Group selection
favors intra-group cooperation. However, group selection is very slow:
each war adds only one bit of information to the genome, as opposed to
individual selection, where every death adds one bit. Still, this
approach would be feasible with unlimited computational power.
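The speed difference can be made concrete with a toy calculation. The one-bit-per-war and one-bit-per-death rates come from the paragraph above; the event rates and the 10^9-bit target are hypothetical assumptions for illustration only:

```python
# Toy comparison of selection rates (all rates are hypothetical assumptions).
genome_bits = 1e9        # information to be selected for (illustrative target)
wars_per_year = 0.5      # assumed rate of group-selection events (wars)
deaths_per_year = 5e7    # assumed rate of individual-selection events (deaths)

# One bit per event, per the argument in the email:
years_group = genome_bits / wars_per_year
years_individual = genome_bits / deaths_per_year
print(f"group selection:      ~{years_group:.1e} years")
print(f"individual selection: ~{years_individual:.0f} years")
```

Under these (made-up) rates, group selection needs on the order of billions of years to what individual selection does in decades, which is why the argument calls it slow.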

Meanwhile it might be more efficient to code AI the old-fashioned way.
If it takes 1 million lines of code at US$1000/line to describe the
human utility function, then we should just spend the $1 billion and do
it without a quibble. This is, after all, only a millionth of the
cost/value of AI, and far cheaper than one war or genocide per bit. We
would be better off spending our efforts where it would make a real
difference.
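
The cost arithmetic above is straightforward; a quick sketch, using only the figures stated in the paragraph:

```python
# Cost arithmetic from the paragraph above (figures as stated in the email).
lines_of_code = 1_000_000
cost_per_line = 1000                 # US$ per line
total_cost = lines_of_code * cost_per_line
print(f"total cost:       ${total_cost:,}")

# "only a millionth of the cost/value of AI" implies:
implied_ai_value = total_cost * 1_000_000
print(f"implied AI value: ${implied_ai_value:.1e}")
```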

-- Matt Mahoney,

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT