Re: [sl4] A model of RSI

From: Mikael Hall
Date: Thu Sep 25 2008 - 13:00:49 MDT

2008/9/25 Nick Tarleton <>

> On Thu, Sep 25, 2008 at 11:22 AM, Matt Mahoney <> wrote:
>> The desire not to die causes us to want to produce copies of ourselves
>> with the same memories, goals, behavior, and appearance, to be turned on
>> after we die. (Whether such a copy transfers your consciousness and becomes
>> "you" is an irrelevant philosophical question).
> It's not irrelevant to my preferences.
I have very similar thoughts. I think human intelligence is more like
musical ability than rationality. I have a theory which I'm trying to use
in web assistants, for example. The theory just uses the basic
Tonic-SubDominant-Dominant cycle to get a skeleton on which human-like
behavior can be built. Classically, one describes this chord progression as
"home - getting lost - finding the way home (and going home)". A mathematical
function Y = F(x) is an implicit description of the same thing: x is "home"
(T), F is the "want of solution" (SD), and Y is "using the solution" (D).
The basic idea is that since intelligent agents must constantly initiate
such cycles (make something happen), such a framework must be of (some)
value. This leads to thinking of the world as if everything were intelligent
agents/entities (even translators in boxes, and stones), where the difference
between a stone and a human is only a matter of degree (of intelligence).
Now, it is only in the SD phase that such variation can exist, if we want
insights that don't lead to rationality-based categories. I suspect that
most of us agree that morality deals with synchronizing "wants of solution".
Also note that Joe and Roland together may be a more intelligent entity, etc.
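To make the mapping concrete, here is a minimal, purely hypothetical sketch
of the T-SD-D cycle as a three-phase loop, with Y = F(x) treated as one such
cycle. All names (Phase, tsd_cycle, solve) are my own illustrative choices,
not anything from an existing system:

```python
from enum import Enum

class Phase(Enum):
    """The three phases of one hypothetical T-SD-D cycle."""
    T = "tonic: at home, a problem x is posed"
    SD = "subdominant: the 'want of solution', F is applied"
    D = "dominant: the solution Y is used, heading home"

def tsd_cycle(x, solve):
    """Run one T-SD-D cycle: pose x (T), search via solve (SD),
    then use the result (D). Returns the result and the phase trace."""
    trace = [Phase.T]       # T: start at "home" with the input x
    y = solve(x)            # SD: the "want of solution" phase, Y = F(x)
    trace.append(Phase.SD)
    trace.append(Phase.D)   # D: the solution is now available to be used
    return y, trace

# Usage: treat Y = F(x) as one cycle, here with F(x) = x^2.
y, trace = tsd_cycle(4, lambda x: x * x)
```

On this reading, an agent's behavior is just many such cycles initiated one
after another, and the interesting variation (per the argument above) lives
entirely in the SD step.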

> Once we have the technology to upload, you will see your dead friends
>> appear to come back to life. Since you have nothing to lose, you will invest
>> in this option, hoping for immortality. The result is a lot of autonomous
>> agents with human-like goals, but with options not available to us, such as
>> the ability to reprogram their brains. Some will directly optimize their
>> utility functions or live in simulations with magic genies. They will die.
>> Others will turn off their fear of death. They will also die. Others will
>> have the goal of replicating themselves or some variation as fast as
>> possible. The copies that fear death and can't change their goals will take
>> over. So we are back where we
>> started, with an evolutionary process.
> The fitness function is not fixed.

No, it is indeed not. Well, we can abstract over the fitness or the
competitors at will, of course. I don't find that interesting in the context
of the singularity:

 The process towards the singularity is likely going to be driven by
the advantages companies can get (by a greedy search), which will involve
the potential in creative computers (SD). Coupled with a very potent ability
to do (D), we may be peaking right now as intelligent entities. Or put like
this: the evolution of T-SD-D cycles may up till now have led to a higher
percentage of happenings/cycles where the SD phase is very dominant. It
may be that computers are becoming the D phase in a cycle in which we are,
right now, the basis for the SD phase.

Mikael Hall

"No, no, you're not thinking; you're just being logical."  Niels Bohr
"There are two kinds of people, those who finish what they start and so on."
 Robert Byrne

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT