Re: [sl4] A model of RSI

From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Thu Sep 25 2008 - 03:22:40 MDT


> Define "improvement". In the context of a real machine (a finite state machine), we could relax the definition of a goal so that utility is a monotonically increasing function of time but reaches some maximum after finite time. Then we could define improvement as reaching that maximum faster or reaching a greater maximum in the same time. However, a finite state machine can only run a finite number of programs, so there can only be a finite sequence of improvements by any definition.

This is not a problem. I'm pretty sure humans are finite state
probabilistic machines. "Finite" includes numbers so brutally high
that they might as well be infinite from the human perspective (and
possibly from the perspective of the visible universe).
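
(As an aside, here is a rough sketch of the "improvement" definition quoted
above - my own illustration with made-up utility lists, not something from
the original post - where improvement means reaching the utility ceiling
sooner, or reaching a higher ceiling in the same number of steps:

    def is_improvement(old_utility, new_utility):
        """Each argument is a list of utility values, one per time step,
        assumed monotonically non-decreasing and capped at some maximum."""
        old_max, new_max = max(old_utility), max(new_utility)
        if new_max > old_max:
            return True   # greater maximum in the same time
        if new_max == old_max:
            # same maximum, but reached at an earlier step
            return new_utility.index(new_max) < old_utility.index(old_max)
        return False

    # e.g. is_improvement([1, 2, 3, 3], [1, 3, 3, 3]) -> True

Nothing hangs on the details; it just makes the quoted definition concrete.)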

> Self improvement requires a test or goal that cannot be altered through generations. Assuming that goal is "intelligence", we are not smart enough to test for it above our own level. If we are, then perhaps someone could describe that test.

I've already proposed a gaggle of tests - mainly taking an open-ended
task (running a successful company, organising an election campaign,
etc.) with a clear relative standard of success, and setting the AIs
head to head. A successful test just means a better understanding of
what we really want.

> We assume (but don't know) that adding more neurons to our brains would make us more intelligent and therefore better. But why do we want to be more intelligent? Because our brains are programmed that way. Intelligence requires both the ability to learn and the desire to learn. Suppose that we engineer our children for bigger brains, but in doing so we accidentally remove the desire to be intelligent. Then our children will engineer our grandchildren according to their interpretation of what it means to improve, not our interpretation.

Much as testing for superior intelligence isn't the problem, tradeoffs
like that will be. We can test for superior intelligence (by our
definition), but without superior intelligence ourselves, we don't
really understand the tradeoffs that our tests are forcing on the
AIs. It might be that a slight modification of our tests,
insignificant to us, would result in dramatic changes in the resulting
AIs - but we'll never know that ourselves.

Stuart


