Re: [sl4] A model of RSI

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Thu Sep 25 2008 - 08:28:22 MDT


--- On Wed, 9/24/08, Bryan Bishop <kanzure@gmail.com> wrote:

> On Wednesday 24 September 2008, Matt Mahoney wrote:
> > We assume (but don't know) that adding more neurons to our brains
> > would make us more intelligent and therefore better.
>
> Whales have larger brains.
>
> > But why do we want to be more intelligent?
>
> We want to be more extropic and effective, if that means intelligent
> then so be it.
>
> > Because our brains are programmed that way.
>
> What?

When people want to "improve", they usually mean becoming smarter, stronger, faster, richer, healthier, and more attractive to the opposite sex. Guess which of these just happen to increase our evolutionary fitness?

> > Intelligence requires both the ability to learn and the desire
> > to learn.
>
> Hardly. I know many people who are very unmotivated to do much of
> anything, yet are 'intelligent' as you would call it.

Ask your unmotivated friends whether they would rather spend 6 hours watching TV or 6 hours staring at the wall.

> > Suppose that we engineer our children for bigger brains,
> > but in doing so we accidentally remove the desire to be intelligent.
> > Then our children will engineer our grandchildren according to their
> > interpretation of what it means to improve, not our interpretation.
>
> Fine, then set up your body or lab on an automated cyclic reproduction
> regimen so that you spit out your 1st-generation children, keep them
> from modifying the source code. Then it's self-contained within that
> system.

If they can't modify their source code, how can they improve?

The more general problem is that you cannot simulate your own source code. You cannot predict what you will think without thinking it first. You need 100% of your memory to model yourself, which leaves no memory to record the output of the simulation.
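
To make the regress concrete, here is a toy Python sketch (just an illustration, not the argument in my paper): a function that tries to predict its own output must first run a complete copy of itself, and the nesting never bottoms out.

def predict_my_own_output(depth=0):
    # A program that tries to predict its own behavior has to simulate a
    # complete copy of itself, which has to simulate a complete copy of
    # itself, and so on. The regress never terminates; resources (stack
    # frames here, standing in for memory) run out before any prediction
    # is produced.
    try:
        return predict_my_own_output(depth + 1)
    except RecursionError:
        return "no prediction after %d nested self-simulations" % depth

print(predict_my_own_output())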

Nor can you cleanly separate your fixed goals from the rest of the code. This is what I am trying to show in my paper. When you formally define what it means for a program to have a goal, the program can't improve with respect to that goal faster than O(log n). This loses to faster methods that accept external input, such as learning and evolution. But those methods don't allow the improving agent to choose its own goal.
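
As a toy contrast (again just an illustration, not the formal result): a purely self-contained rewriter's whole future is fixed by its starting code, while a learner that accepts external input keeps acquiring bits it could never generate on its own, yet has no say over which bits arrive.

import os

def closed_rewriter(program: bytes, steps: int) -> bytes:
    # Closed-loop "self-improvement": each new version is a deterministic
    # function of the previous one, so the entire trajectory is fixed by
    # the initial program. No information the program did not already
    # contain can ever enter the system.
    for _ in range(steps):
        program = bytes(reversed(program)) + bytes([sum(program) % 256])
    return program

def open_learner(program: bytes, steps: int) -> bytes:
    # Open-loop learning: each step folds in bytes from outside
    # (os.urandom stands in for sensory input), so the state gains new
    # information at every step, but what arrives is dictated by the
    # environment, not chosen by the agent.
    for _ in range(steps):
        program = program + os.urandom(4)
    return program

seed = b"agent v1"
print(closed_rewriter(seed, 100) == closed_rewriter(seed, 100))  # True: nothing new
print(open_learner(seed, 100) == open_learner(seed, 100))        # False (almost surely)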

-- Matt Mahoney, matmahoney@yahoo.com


