Re: Memory Merging Possible For Close Duplicates

From: Mike Dougherty (msd001@gmail.com)
Date: Fri Mar 14 2008 - 08:47:29 MDT


On Fri, Mar 14, 2008 at 1:15 AM, Lee Corbin <lcorbin@rawbw.com> wrote:

> Ah. Here you mean not only the computer science "shared memory"
> but real human type shared memory.

Yes. Though the analogy is strained, I don't know what other one to use, because
I'm assuming the merging of states isn't feasible to discuss unless we're
talking about uploaded brains. I guess you could plan for some convoluted
chemistry and physical manipulation of meat, but that seems too icky for
discussion. :) I expect that the uploaded person will be software running
on top of some general virtual-person hardware. If so, there will be no
direct way to experience whether your memories are retrieved from a GLUT
(giant lookup table) or somehow recreated on the fly from templates (something
like the compression/decompression of a world of context down to a few relevant
bits that can be used to reconstruct a most-likely scenario we believe to be a
real memory).
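
To keep that strained analogy honest, here's a toy Python sketch of the two
retrieval styles I have in mind. Everything in it (MemoryGLUT,
MemoryReconstructor, the cue/bits structure) is invented purely for
illustration, not a claim about how an upload would actually store memories.

    # Toy illustration only -- not a claim about upload architecture.

    # Style 1: GLUT -- every memory stored verbatim and simply looked up.
    class MemoryGLUT:
        def __init__(self):
            self.table = {}          # cue -> fully detailed memory

        def recall(self, cue):
            return self.table.get(cue)

    # Style 2: reconstruction -- only a few salient bits (a "template")
    # are kept, and a most-likely scenario is rebuilt on demand.
    class MemoryReconstructor:
        def __init__(self):
            self.templates = {}      # cue -> handful of relevant bits

        def recall(self, cue):
            bits = self.templates.get(cue, {})
            # Fill in whatever was thrown away with plausible defaults.
            return {"setting": "unremarkable", "mood": "neutral", **bits}

    glut = MemoryGLUT()
    glut.table["lunch, last Tuesday"] = "soup, by the window, raining"
    rebuilt = MemoryReconstructor()
    rebuilt.templates["lunch, last Tuesday"] = {"food": "soup"}
    print(glut.recall("lunch, last Tuesday"))
    print(rebuilt.recall("lunch, last Tuesday"))

From the inside, recall() looks the same either way -- which is the point.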

> > and a context switch back will appear to those inhabiting the suspended
> > environment that the results of those independent threads have been
> > computed instantly.
>
> That's not very clear, IMO. With ordinary raw threads or processes
> running on a computer, sure, one moment the process has access to
> data structures X and Y, and the next, equal access to Z. But that
> entirely ignores the knotty problem of how memories are added to
> people, as you say "instantly".

what makes "ordinary raw thread" different inside your PC, in a computronium
Jupiter Brain or the entire detectable universe? I don't mean this to be a
rhetorical question. Given my previous paragraph (this post) - I would like
you to describe what makes PC threads, their equivalent analogue for the
software mind running on virtual human hardware and the real-world mechanism
(whatever it might be)

> Normally each new experience you have is immediately compared
> on some sort of salience measure to everything else that has ever
> happened to you, i.e., to all your other memories. That's why you
> are "reminded" of things, some of which happened a long time ago.
> Now if you get enough new experiences, the new memories that
> are generated are slowly integrated into all your existing ones.
> The computer analogy seems a little strained here, at least with
> the kinds of algorithms we have today running on our machines.

Now I have a greater appreciation for the trouble you see with close copies
not being close enough to merge. Suppose the salience measures for two
different copies drifted sufficiently far apart that one copy would find a new
fact compatible enough with prior experience to learn it, while the other was
unable to accept the new information because it conflicted with its prior
experience. In an extreme case we could construct a scenario where the two
copies were led to hold completely incompatible beliefs (e.g. religious
conditioning).
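
A throwaway numeric sketch of that divergence (the threshold, biases, and
"fit" numbers are all made up):

    # Invented numbers -- just to show how two copies could split on one fact.
    def accepts(fact_fit, prior_bias, threshold=0.5):
        """A copy learns a new fact only if it squares well enough
        with that copy's accumulated prior experience."""
        return fact_fit * prior_bias >= threshold

    fact_fit = 0.7
    print(accepts(fact_fit, prior_bias=0.9))  # True:  copy A learns it
    print(accepts(fact_fit, prior_bias=0.3))  # False: copy B rejects it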

We may need to evolve some method of dealing with this. My guess is that our
normal memory-pruning mechanism could be employed to simply erase/suppress any
incompatibilities. There is evidence (of varying quality) that sleep
facilitates this kind of mental housekeeping. There is also evidence that
psychological defense mechanisms will artificially create memories to block
recall of traumatic events.
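
Forcing the analogy once more, the pruning I have in mind looks something like
the sketch below; the compatibility test is, of course, the hard, hand-waved
part, and everything here is invented for illustration.

    # Throwaway sketch of suppressing incompatibilities during a merge.
    def merge_and_prune(memories_a, memories_b, compatible):
        """Keep copy A's memories, adding copy B's only where the
        (hand-waved) compatibility test doesn't object."""
        merged = list(memories_a)
        for m in memories_b:
            if all(compatible(m, kept) for kept in merged):
                merged.append(m)
            # else: quietly drop it, the way sleep seems to tidy things up
        return merged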

Perhaps the reintegration process will involve vetting which experiences to
keep from the copy? If you spawn a LeeCorbin_EmptyTrash process, it might
not require the vast knowledge base of your entire history (possibly only
the history of events since the last time it was invoked). Now this
task/process believes itself to be LeeCorbin (insofar as you would only grant
such a process the implicit trust you extend to yourself). After this
sub-self has fulfilled its reason for existing and you have verified
success, you may choose to reintegrate the complete experiential record of
that process; in that case, though, you might as well have done the task
directly. At the opposite extreme, you subsume none of the experience because
you are confident there is minimal novel experience associated with that task.
The degree to which you care about the experience is probably related to how
much of your Self you originally invested in the creation of the
clone/sub-process.
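
In code-ish terms, that choice at reintegration might look like the sketch
below. The SubSelf class, the "investment" number, and the rest are all
hypothetical names of mine, just to pin down the idea.

    # Illustrative only: spawn a narrow-context sub-self, then decide
    # how much of its experience to take back.
    class SubSelf:
        def __init__(self, task, context):
            self.task = task
            self.context = context       # only recent, task-relevant history
            self.experience_log = []

        def run(self):
            self.experience_log.append("performed " + self.task)
            return "success"

    full_history = ["...decades of memories..."]
    empty_trash = SubSelf("EmptyTrash", context=full_history[-1:])

    if empty_trash.run() == "success":
        investment = 0.05    # how much of my Self went into this clone
        if investment > 0.5:
            full_history.extend(empty_trash.experience_log)  # subsume it all
        # else: discard -- confident there was minimal novel experience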

I realize this isn't exactly a copy or close duplicate (per the subject
line). Would you call a clone that has _only_ the last 2 minutes of
task-specific knowledge a copy? Would you call it a completely different
identity? I think this question comes out of the discussion about what makes
an identity: the model predicting the person's behavior, or the memory of
prior situations? (A tough call, because past events are often the raw data
upon which the model is based.) Is it possible to observe that I choose blue
rather than red in 100 instances, then delete your memory of those 100
instances and retain only the knowledge that I prefer blue? Upon my next
choice, will you still be able to assess that I made a characteristic choice
of blue? Is there any value in incurring the storage overhead of recording the
details of every one of those 100 prior instances? How much memory
optimization do we already perform that we would need to keep performing in an
uploaded state? Again, I apologize for the strained computer analogy, but I
continue to assume the most plausible way any of these copies could exist is
after uploading.
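
One last sketch of the blue/red compression, again with invented names and no
pretense of being more than an illustration:

    # Keep only the summary fact, drop the raw observations.
    observations = ["blue"] * 100        # 100 recorded choices of blue over red

    preference = max(set(observations), key=observations.count)   # "blue"
    del observations                     # the storage overhead is gone

    def looks_characteristic(next_choice):
        # With just the summary I can still judge the next choice,
        # though I can no longer recount any of the 100 instances.
        return next_choice == preference

    print(looks_characteristic("blue"))  # True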


