Re: ethics

From: Samantha Atkins (samantha@objectent.com)
Date: Wed May 19 2004 - 23:31:19 MDT


On May 19, 2004, at 12:55 PM, Christopher Healey wrote:
>

> We're not talking about understanding a mind greater than our own;
> we're talking about understanding a seed mind to the greatest extent
> that our available intelligence allows, rather than stop short of the
> assurance we COULD attain because we're lazy, scared, or impatient for
> results.

The amount of assurance we can attain, from my experience at least, is
much smaller than you seem to hope. The amount of assurance we can
obtain even for systems meant to be static and fully predictable isn't
all that large. If this is true of static programs of more than modest
capacity, then how much more true is it of a program designed to
recursively self-improve into an intelligence much greater than our
own?

> It may be that an AGI mind will NOT be able to be instrumented in any
> human-understandable way. But that's a bad starting assumption.
> Better to assume we can understand it and find out otherwise, than to
> never start when we could have accreted more influence over the
> outcome. We're causally responsible for whatever we set into motion;
> doesn't it make sense to be 5% accurate instead of 3%, if it IS within
> our power?
>

At a very great stretch, we will understand the seed. We will not
understand it much beyond that.

> I feel my biggest realization was inverting my own thinking on the
> topic. FAI is theoretically a quest to achieve the best results, but
> pragmatically a quest to mediate the worst. After addressing those
> issues (to whatever extent possible), there is a new target set of
> "less negative" results to be addressed. If indeed a singularity is
> approaching, our time-effort window is finite, and the process WILL
> stop short.
>
> What does appear to be the case is that we have SOME influence within
> this window. What I've taken away from the SIAI as their overriding
> theme, is that we should responsibly use ALL of this time-effort
> window, and avoid an inferior result from a needlessly premature
> take-off.
>

I am not sure I even believe any of us is fully capable of judging when
a takeoff is premature. It may be that any SAI gives us at least a
prayer of surviving and thriving beyond a certain point in time. It is
difficult to believe that we have the tools and methodology to really
coerce the resulting SAI to our liking. That doesn't mean we shouldn't
try our best. But it also doesn't mean that holding back until we
really understand is a program likely to terminate before the human
race does.

- samantha
