Re: Nondeterministic Seed

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Sep 10 2001 - 12:54:50 MDT


Where nondeterminism exists, use redundancy, error-checking, stochastic
redundant error-checking, and so on, to drive the probability of a single
nonrecoverable error down to effectively zero (i.e., 10^-64 over the age
of the Universe, or whatever).
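
A minimal sketch of that strategy in Python (the failure model and the
five-replica count are illustrative assumptions, not from this post):
run the same computation on several nodes and accept the majority
answer, so that any single corrupted result is simply outvoted.

```python
import random
from collections import Counter

def flaky_compute(x, error_rate=1e-3):
    """Illustrative unreliable computation: occasionally returns a
    corrupted value, standing in for a bitflip or hardware fault."""
    result = x * x
    if random.random() < error_rate:
        result ^= 1 << random.randrange(32)  # flip one random bit
    return result

def redundant_compute(x, replicas=5):
    """Run the computation on several 'nodes' and take a majority vote.
    With independent failures at rate p, the vote itself goes wrong
    only when a majority of replicas fail, roughly p**3 here."""
    votes = Counter(flaky_compute(x) for _ in range(replicas))
    answer, count = votes.most_common(1)[0]
    if count <= replicas // 2:
        raise RuntimeError("no majority; escalate to error recovery")
    return answer

print(redundant_compute(12345))
```

Stacking such layers (vote over the votes, checksum the voters, and so
on) is what drives the residual failure rate down as far as you care
to pay for.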

A seed AI running on a Beowulf network isn't deterministic. The first
reason is operations that take mysterious, nonduplicable amounts of time,
such as disk accesses. The second reason is bugs in the code. Not dying as
the result of a single bitflip is a problem that needs to be solved long
before quantum hardware comes into play.
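
The timing point can be made concrete with a toy sketch (Python; the
values and the thread pool are my own illustrative setup): if partial
results are combined in whatever order the workers happen to finish,
and the combining operation is not associative, which floating-point
addition is not, then identical inputs can produce different outputs
from run to run.

```python
import concurrent.futures

def partial_result(x):
    # Stand-in for a node's work; its completion time depends on disk,
    # scheduler, and network noise: mysterious, nonduplicable timing.
    return x

# Values chosen so that addition order matters in floating point:
# (1e16 + 1.0) - 1e16 == 0.0, but (1e16 - 1e16) + 1.0 == 1.0.
values = [1e16, 1.0, -1e16, 1.0] * 250

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(partial_result, v) for v in values]
    total = 0.0
    # Accumulate in whatever order the workers happen to finish.
    for f in concurrent.futures.as_completed(futures):
        total += f.result()

print(total)  # can vary from run to run, with no bug anywhere
```

Nothing in that snippet is broken; the nondeterminism comes purely
from scheduling.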

Friendliness doesn't break down when you begin dealing in probabilistic
hardware. A Friendly mind just tries to minimize the probability of (a)
minor local suboptimizations, (b) transient local failures, (c)
nonrecoverable local failures, and (d) nonrecoverable global failures.
It's (c) and (d) that I'd want to see driven down to a probability of
"effectively zero", i.e., 10^-64 over the age of the Universe, or
whatever. But with a large system, and a good design - never mind a
superintelligent design - I'd think that preventing local probabilistic
failures from turning into global nonrecoverable catastrophes would be,
well, not all that hard really. I think you're making too much of the
dichotomy between perfection and imperfection. There is such a thing as
"perfect enough for all practical purposes". If the imperfection of the
world consists of one pixel out of place every thousand years, then maybe
things are just a little bit overoptimized - maybe too much computing
power is being expended on preventing errors. But if you were really that
paranoid, it would be doable.
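
For scale, the arithmetic behind "effectively zero" (the per-layer
miss rate of 10^-4 below is an assumed figure, chosen only to show
the scaling): independent checking layers multiply, so the number of
layers needed grows with the logarithm of the target, not with the
target itself.

```python
import math

p_miss = 1e-4    # assumed chance that one checking layer misses an error
target = 1e-64   # "effectively zero" over the age of the Universe

# k independent layers all miss with probability p_miss**k; solve
# p_miss**k <= target for the smallest integer k.
k = math.ceil(math.log(target) / math.log(p_miss))
print(k, p_miss ** k)  # 16 layers already reach the 1e-64 target
```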

Friendliness is based on a mental structure in which errors don't spread
because the mind sees the errors *as errors*, rather than as desirable
normal functioning. So it isn't a catastrophe if a bit flips every now
and then.
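
A toy rendering of that structure (Python; the invariant and the retry
policy are my own illustrative choices): the essential property is
that a corrupted value fails an explicit check and gets recomputed
locally, instead of being accepted as a valid belief and acted on.

```python
import math, random

def noisy_square(n):
    """Illustrative computation that occasionally suffers a 'bitflip'."""
    r = n * n
    if random.random() < 0.01:
        r ^= 1 << random.randrange(16)  # simulated hardware fault
    return r

def checked(compute, invariant, retries=3):
    """Treat any result violating its invariant as an error to correct,
    never as normal output: the failure is seen *as* a failure."""
    for _ in range(retries):
        result = compute()
        if invariant(result):
            return result
        # Discard and recompute, so the flipped bit cannot spread
        # into later reasoning.
    raise RuntimeError("persistent failure; escalate, don't act on it")

print(checked(lambda: noisy_square(7),
              invariant=lambda r: math.isqrt(r) ** 2 == r))
```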

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


