Re: Multiple Future Bens

From: Gordon Worley
Date: Mon Dec 23 2002 - 16:37:38 MST

On Monday, December 23, 2002, at 01:35 PM, Rafal Smigrodzki wrote:

>> To be honest, I have some doubts about the Ben5 scenario myself. I'm
>> not at all yet certain that the kind of consciousness we are isn't
>> inherently complexity-limited in some ways. A paranoid personality
>> allowed to evolve and self-modify would quickly spin into some form of
>> self-destruction. Given infinite self-modifiability, wouldn't the
>> smallest flaws in our personalities or mental health spiral out of
>> control? Perhaps we could modify our personality in such a way as to
>> avoid this, but that presumes we know how to handle the
>> self-modification tools and know what to change and how, which in turn
>> presumes we're "given" those tools and that knowledge by an external
>> process such as an FAI. I can't quite picture us bootstrapping it all
>> the way given just the "mental room"...
> ### Very good points. Aside from getting a little help from the
> Friendly AI and other grown-ups, there is a way of dealing with the
> risks from self-modification: copying at each major redesign, so the
> new version can start exploring its capabilities, while the remaining
> parts of your spawn look closely for any signs of malfunction. In case
> of major problems, the spawn would sever the links to the new version,
> and make a mental note not to use the particular modifications again.
> There is a similar approach in Greg Egan's "Diaspora", where the
> infrastructure of the polis charged with production of new citizens
> keeps tracking the performance of each design and adjusts the
> frequency of its shapers accordingly.
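For concreteness, the copy-at-each-redesign scheme quoted above could be sketched roughly as follows. Everything here is invented for illustration (the `Mind` class, `explore_modification`, and the malfunction check are all hypothetical names, and a real malfunction detector is of course the hard part):

```python
# Sketch of the quoted scheme: spawn a modified copy, watch it for
# malfunction, sever it and blacklist the modification if it fails.
# All names here are hypothetical, invented for this illustration.

class Mind:
    def __init__(self, traits):
        self.traits = dict(traits)

    def copy_with(self, modification):
        """Spawn a copy with a modification applied."""
        new = Mind(self.traits)
        new.traits.update(modification)
        return new

def explore_modification(original, modification, blacklist, is_malfunctioning):
    """Try a redesign on a copy; sever and blacklist it on malfunction."""
    mod_key = frozenset(modification.items())
    if mod_key in blacklist:
        return original                # never retry a known-bad modification
    candidate = original.copy_with(modification)
    if is_malfunctioning(candidate):
        blacklist.add(mod_key)         # the "mental note" not to use it again
        return original                # sever the link to the new version
    return candidate                   # adopt the successful redesign
```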

This sounds morally suspect. The reason for creating a copy would be
that you don't believe your modification will work; it's just a guess
and you want to try it out. This is, in a sense, what evolution does:
it `tries' phenotypes, and the ones that work get reproduced.

Some kind of revision control system would at least be a step in the
right direction if you're worried that you're going to screw up (it's
not a solution, though, because you can still end up with new revisions
that make it impossible to revert or switch branches). A better plan,
though, is to be an actual mind programmer, rather than approximating
one the way a human does, and get it right all of the time.
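A minimal sketch of what such revision control might amount to (the `MindRepository` name and its interface are invented for illustration): snapshot the state before each change so a bad change can be rolled back. Note that it inherits the caveat above: a bad enough revision could damage the revert machinery itself, so this is a safety net, not a guarantee.

```python
# Minimal revision-control sketch for self-modification: snapshot before
# each change, revert on demand. All names are hypothetical illustrations.
import copy

class MindRepository:
    def __init__(self, state):
        self.state = state
        self.history = []              # stack of prior snapshots

    def commit(self, modify):
        """Snapshot the current state, then apply a modification function."""
        self.history.append(copy.deepcopy(self.state))
        modify(self.state)

    def revert(self):
        """Roll back to the previous snapshot, if any remains."""
        if self.history:
            self.state = self.history.pop()
```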

Now, you might still want to make copies or children or something like
that, which is perfectly moral so long as you have the minimal
resources they need to live and you are creating copies that you
honestly believe will do well.

Gordon Worley
"Man will become better when you show him what he is like."
                                                --Anton Chekhov
PGP:  0xBBD3B003

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT