From: Alden Jurling (nakomus@cnsp.com)
Date: Sun Dec 02 2001 - 20:23:43 MST
I've been lurking here for a while, but I felt I should reply to this.
Your algorithm/computation need only consider the elements that are relevant
to what you're modeling. If you assume that the SI running the
simulation/computation is doing it for a *reason*, then you can expect the
simulation (and hence the minds in it) to be optimized for whatever that
purpose is. Many questions that can be answered by a complete simulation of
some phenomenon can also be answered by a simulation of an approximation of
that phenomenon, at a much reduced cost in computing power.
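To make the cost difference concrete, here's a toy sketch (Python; the
numbers and the random-walk model are invented for illustration, not
anyone's actual proposal): if the question is only about aggregate
behavior, a statistical shortcut answers it without stepping any
individual element of the full simulation.

    import random

    def full_simulation(n_agents=10_000, steps=1_000):
        # Step every agent individually: O(n_agents * steps) work.
        positions = [0] * n_agents
        for _ in range(steps):
            for i in range(n_agents):
                positions[i] += random.choice((-1, 1))
        mean = sum(positions) / n_agents
        var = sum(p * p for p in positions) / n_agents - mean ** 2
        return mean, var

    def approximate_model(steps=1_000):
        # For a +/-1 random walk the mean position is 0 and the variance
        # grows linearly with the step count, so the same question is
        # answered in O(1), with no agents simulated at all.
        return 0.0, float(steps)

    print(full_simulation())    # roughly (0.0, 1000) -- ten million updates
    print(approximate_model())  # exactly (0.0, 1000.0) -- nearly free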
----- Original Message -----
From: James Rogers <jamesr@best.com>
To: <sl4@sysopmind.com>
Sent: Sunday, December 02, 2001 7:34 PM
Subject: Re: New website: The Simulation Argument
> On 12/2/01 5:51 PM, "Gordon Worley" <redbird@rbisland.cx> wrote:
> >
> > A SIMULATION involves creating new minds who are defined as being
> > self-aware, independent minds.
> >
> > A COMPUTATION would be a simulation where the minds involved weren't
> > independent or even really self-aware, but merely functioned based on an
> > algorithm very much like that in a mind. Just as we can model simple
> > life forms (e.g. ants), we will all look like little ants to Powers.
>
>
> This is almost the same argument as the "Can A Giant Look-Up Table Be
> Conscious" debate we had a while back (though I don't know whether that
> was on this list, the Extropians list, or both). I don't see a functional
> difference in your definitions. Any algorithm sufficiently accurate to
> perfectly predict all changes of state of the actual phenomenon is
> mathematically equivalent to a perfect simulation of that phenomenon (some
> restrictions apply -- see below).
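(To make James's equivalence concrete -- a toy Python sketch, with a
deliberately tiny state space invented for the example: over a finite
state space, precomputing every transition into a giant look-up table and
replaying it is observationally identical to running the algorithm
itself.)

    def step(state):
        # Some arbitrary deterministic update on a 64-element state space.
        return (3 * state + 1) % 64

    def run_algorithm(state, n):
        # "Computation": apply the update rule step by step.
        for _ in range(n):
            state = step(state)
        return state

    # "Giant look-up table": tabulate every possible transition once...
    table = {s: step(s) for s in range(64)}

    def run_table(state, n):
        # ...then answer the same question by look-ups alone.
        for _ in range(n):
            state = table[state]
        return state

    assert run_algorithm(5, 100) == run_table(5, 100)  # indistinguishable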
>
>
> > The difference is important because, just like the artificial ants, the
> > artificial humans won't be real in that they won't have been programmed
> > as full minds, but to respond *like* they are full minds.
>
>
> The only way this difference can be true mathematically is if you assume
> that the human brain is not a finite state machine. I personally am of the
> opinion that the human brain is a finite state machine (in all aspects that
> matter), and therefore have to reject your difference. From the assumption
> that a human mind is essentially an FSM, any code capable of fully
> responding like a human mind would be the same as the code necessary to
> simulate a human mind (Kolmogorov complexity).
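(A hypothetical illustration of that FSM claim in Python: two programs
with different source code but identical input/output behavior are, as
machines, the same up to relabeling of states. Both machines below are
invented for the example.)

    from itertools import product

    def machine_a(bits):
        # Two-state version: tracks the parity of 1-bits seen.
        state = 0
        for b in bits:
            state ^= b
        return state

    def machine_b(bits):
        # Redundant four-state version of the same behavior.
        trans = {("E", 0): "E2", ("E2", 0): "E", ("E", 1): "O",
                 ("E2", 1): "O2", ("O", 0): "O2", ("O2", 0): "O",
                 ("O", 1): "E", ("O2", 1): "E2"}
        state = "E"
        for b in bits:
            state = trans[(state, b)]
        return 0 if state.startswith("E") else 1

    # Two DFAs with m and n states that differ anywhere differ on some
    # input shorter than m * n, so checking up to length 8 is exhaustive.
    for n in range(9):
        for bits in product((0, 1), repeat=n):
            assert machine_a(bits) == machine_b(bits)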
>
>
> > Also, going back to your initial note, even if the problem can't be
> > reduced, it should be possible to run a very large computation without
> > having to go into simulations. It's really a question of how many
> > resources some SIs are willing to put into figuring out how some humans
> > acted or might have acted. Unless, of course, you're assuming that
> > there is some inherent difference between algorithms that respond like
> > humans and actual human mind algorithms.
>
>
> The problem is that I assume that the code required to respond exactly like
> a human is identical to the code required for "human mind algorithms".
> Furthermore, I think this runs into a weak version of the Halting Problem.
> There is no way to predict some future state without running your algorithm
> through all the intermediate states. In this context, I don't see any
> difference at all between a "simulation" of a human mind and the computation
> of state transitions of something that responds exactly like a human mind.
> Both of these activities denote identical computational processes.
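(A sketch of the no-shortcut point, using the Collatz map as a stand-in
because no closed form for its trajectories is known: getting the state at
step n means computing every state before it, which is exactly what
"running the simulation" denotes.)

    def step(x):
        # Collatz update: halve if even, else 3x + 1. No known formula
        # jumps straight to the n-th state.
        return x // 2 if x % 2 == 0 else 3 * x + 1

    def state_at(x0, n):
        # The only known general method: grind through every
        # intermediate state. Predicting IS simulating.
        x = x0
        for _ in range(n):
            x = step(x)
        return x

    print(state_at(27, 50))  # unavailable without the 50 steps before it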
>
> Cheers,
>
> -James Rogers
> jamesr@best.com
>