Re: Definition of strong recursive self-improvement

From: Randall Randall (randall@randallsquared.com)
Date: Sat Jan 01 2005 - 14:46:26 MST


On Jan 1, 2005, at 2:41 AM, Randall Randall wrote:
> However, no amount of simulation will allow you to
> design an algorithm that is more intelligent than
> you are, since it must be executable within your
> current algorithm.

Bad form to reply to oneself, but:

Of course, in a strict sense, if simulations are part of
the current algorithm, then there is no possible algorithm
which is more intelligent in a "strong" sense. However,
the rest of what I wrote still stands: there will be
algorithms which cannot be simulated on the current
hardware by the current algorithm, and which are in fact
better than the current algorithm (though perhaps not
provably so). Even so, they can't be *that* much better
than the current algorithm, because an algorithm vastly
more efficient in speed or memory would fit inside a VM
on the current system, and so could be simulated after
all. Even a tiny difference in effective speed or memory
use may grow quite large over millions of iterations, of
course, and so the edge still goes to an AI without goals
that require internal simulation (Friendliness being one
such goal).
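
To make the resource point concrete, here's a toy sketch in
Python (the names and budget numbers are invented for the
illustration, nothing more): the current system can only check
a candidate by running it inside its own budget, so anything
that needs more than the host can spare simply can't be
verified by simulation.

  class BudgetExceeded(Exception):
      """Candidate needs more resources than the host can spare."""

  def simulate(candidate_steps, host_budget, vm_overhead=1.1):
      """Return True if the candidate fits in a VM within the
      host's budget.

      candidate_steps -- abstract work the candidate needs
      host_budget     -- steps the host can spare for simulation
      vm_overhead     -- multiplicative cost of running in the VM
      """
      cost = candidate_steps * vm_overhead
      if cost > host_budget:
          raise BudgetExceeded("needs %d steps, host has %d"
                               % (cost, host_budget))
      return True

  if __name__ == "__main__":
      HOST_BUDGET = 1000000  # stand-in for "the current system"

      # A candidate only slightly leaner than the host fits...
      print(simulate(900000, HOST_BUDGET))

      # ...but one that genuinely exceeds the host can't be
      # simulated at all, which is the sense in which simulable
      # improvements are bounded.
      try:
          simulate(2000000, HOST_BUDGET)
      except BudgetExceeded as exc:
          print("cannot verify by simulation:", exc)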

In the past, of course, Eliezer has mentioned using
proofs, which might replace some kinds of simulation.
Proofs of the correctness of code have so far proven
(sorry!) elusive.
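
For what a machine-checked proof of code looks like in
miniature, here is about the smallest possible example (Lean 4
syntax; the function, theorem name, and spec are invented for
the illustration):

  -- A tiny program and a machine-checked proof that it
  -- meets its specification.
  def double (n : Nat) : Nat := n + n

  theorem double_correct (n : Nat) : double n = 2 * n := by
    unfold double  -- goal becomes: n + n = 2 * n
    omega          -- linear arithmetic closes it

Scaling that up from toy arithmetic to real, self-modifying
code is exactly the part that has stayed elusive.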

--
Randall Randall <randall@randallsquared.com>
Property law should use #'EQ , not #'EQUAL .

