From: Matt Mahoney (email@example.com)
Date: Tue Dec 01 2009 - 10:41:43 MST
John McNamara wrote:
> What is the maximum tolerable error that will not result in the failure of your engineering project (i.e. upload of a live human, with no apparent deviations from expected normal thinking patterns, including fuzzy things like emotions/inspiration, for at least 1000 years at a 99.9999% confidence level, etc.)?
Suppose there was a program that simulated you so well that nobody could tell the difference between you and the program in a Turing test environment. What is the probability that the program will be you after you shoot yourself?
-- Matt Mahoney, firstname.lastname@example.org
From: John McNamara <email@example.com>
Sent: Tue, December 1, 2009 9:35:55 AM
Subject: Re: [sl4] Re: goals of AI
First post to the list (braces for the bullet), and an observation on this thread's debate.
To me this sounds like a matter of simulation resolution.
A human mind is the information output 'artefact' of a physical system.
We can choose to simulate that physical system over a wide range between the following two extremes:
a: extreme low resolution
1 bit: 1 = mind is 'on', 0 = mind is 'off'
only useful in financial accounting obviously
b: extreme high resolution
simulate using _all_ information on the system
As we don't have a final, complete physics theory of everything, we obviously cannot even determine whether this is possible.
It would effectively be an absolutely perfect simulation of actual physical reality, all the way down past quarks, strings and n dimensions to whatever idea is at the very bottom.
A bit beyond SL4, I suspect.
Between a and b lies a large range.
It's possible (pending a TOE) that simulation at resolution b guarantees a zero error rate in the simulation output, the mind. Barring non-physical influences, that would leave no wiggle-room to say the simulation isn't perfect in every way.
Any level of simulation below b introduces errors into the output data, all the way down to having just 1 bit of reliable data at level a.
Therefore it's a matter of deciding the acceptable error level for your objective.
All you need is a mastery of the physics and math required to get down to your acceptable error rate.
Level b involves all sorts of things we're not good at, like probability and infinities.
There are two separate questions here.
Is any non-zero error acceptable in principle?
This is a philosophical question I think, not an engineering one.
What is the maximum tolerable error that will not result in the failure of your engineering project (i.e. upload of a live human, with no apparent deviations from expected normal thinking patterns, including fuzzy things like emotions/inspiration, for at least 1000 years at a 99.9999% confidence level, etc.)?
This is a practical engineering problem for a branch of engineering that doesn't exist yet.
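The 1000-year / 99.9999% figure above implies a concrete per-year error budget. A back-of-envelope sketch (the independence assumption is mine, not part of the original question):

```python
# Error budget for the hypothetical upload project: survive 1000 years
# with overall 99.9999% confidence of no failure.
# Assumed here (not stated in the post): failure events are independent
# year to year, so per-year reliability compounds multiplicatively.
target_confidence = 0.999999  # overall success probability over the horizon
years = 1000

per_year_reliability = target_confidence ** (1 / years)
per_year_failure = 1 - per_year_reliability

print(f"allowed failure probability per year: {per_year_failure:.3e}")
# roughly 1e-9 per year -- "nine nines" reliability at the yearly level
```

In other words, the engineering spec translates into about a one-in-a-billion tolerable failure chance per year of simulation, under that independence assumption.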
My answers, for the curious:
I'm not comfortable with a non-zero error, I must admit, now that it's "on the menu" so to speak. That said, the current pre-upload me would jump at any error rate accepted by sane-looking engineering types as an alternative to oblivion. I wouldn't be surprised if the post-upload me wanted a lot of virtual beer to get over the whole thing.
No idea, but I wouldn't be stunned to learn that something more detailed than neural charge levels was required. That would be unfortunate, because it would be harder. Perhaps I'm pessimistic on this one.
Apologies if this has wandered off-topic.
John McNamara
This archive was generated by hypermail 2.1.5 : Mon May 20 2013 - 04:01:22 MDT