From: Christopher Healey (CHealey@unicom-inc.com)
Date: Wed May 19 2004 - 13:55:23 MDT
My quote was truncated; it should read:
> any black-box emergent-complexity solution is to be avoided >>> almost without exception <<<
The primary point I was supporting is that if you CAN choose, ALWAYS choose the more predictable path UNLESS the potential risk of NOT doing so is greater. Under a known race-to-singularity situation, it may be the more rational choice to trade off a relative amount of predictability for first-to-take-off status. This modifier to the rule, while valid, seems more likely to be used as an "end justifies the means" rationalization by those who would act irresponsibly, so I'd be surprised if the SIAI focuses on that part of it in their pop campaign.
Getting back to your point, I don't believe that your equivalence statement is valid.
We're not talking about understanding a mind greater than our own; we're talking about understanding a seed mind to the greatest extent that our available intelligence allows, rather than stopping short of the assurance we COULD attain because we're lazy, scared, or impatient for results. It may be that an AGI mind will NOT be instrumentable in any human-understandable way. But that's a bad starting assumption. Better to assume we can understand it and find out otherwise, than to never start when we could have accreted more influence over the outcome. We're causally responsible for whatever we set into motion; doesn't it make sense to be 5% accurate instead of 3%, if it IS within our power?
I feel my biggest realization was inverting my own thinking on the topic. FAI is theoretically a quest to achieve the best results, but pragmatically a quest to mediate the worst. After addressing those issues (to whatever extent possible), there is a new target set of "less negative" results to be addressed. If indeed a singularity is approaching, our time-effort window is finite, and the process WILL stop short.
What does appear to be the case is that we have SOME influence within this window. What I've taken away from the SIAI as their overriding theme is that we should responsibly use ALL of this time-effort window, and avoid an inferior result from a needlessly premature take-off.
From: firstname.lastname@example.org on behalf of fudley
Sent: Wed 5/19/2004 1:30 PM
Subject: RE: ethics
On Wed, 19 May 2004 12:56:34 -0400, "Christopher Healey"
> any black-box emergent-complexity solution is to be avoided
That's equivalent to saying never make an intelligent machine, because you
can never understand a mind greater than your own. Forget minds, even simple
programs can be beyond understanding. It would only take a few
seconds to write a computer program to find an even number greater than
4 that is not the sum of two primes and then stop, but will it ever
stop? Nobody knows; all you can do is run the program and
watch what the machine does, and you might be watching forever.
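[The few-second program John describes is easy to make concrete. A minimal Python sketch (the function names are mine, not from the original message) of a search for a counterexample to Goldbach's conjecture:]

```python
def is_prime(n):
    """Trial-division primality check; slow but obviously correct."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_sum_of_two_primes(n):
    """True if some pair of primes p, n-p sums to n."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def find_goldbach_counterexample():
    """Search even numbers > 4 for one that is NOT the sum of two primes.
    Halts if and only if Goldbach's conjecture is false -- and nobody
    knows whether it is, so nobody knows whether this loop terminates."""
    n = 6
    while True:
        if not is_sum_of_two_primes(n):
            return n
        n += 2
```

[Whether `find_goldbach_counterexample()` ever returns is exactly the open question; running it and watching is the only option, which is John's point.]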
John K Clark
-- http://www.fastmail.fm - Does exactly what it says on the tin
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT