From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Thu Dec 14 2000 - 18:53:27 MST
Something which I wrote down for the first time today, which I've always
taken for granted, but which I just now realized I have never seen anywhere else:
A seed AI will eventually be able to find out *exactly* what the programmer
meant by a definition in Friendly AI, or the *exact* reason why a
programmer gave a specific answer for a scenario, even if the programmers
do not themselves know, through nondestructive nanotechnological scanning
of the programmers' brains. During the initial stages a seed AI can only
analyze the programmers by asking questions and observing responses, but
there *will* be an opportunity to clear up lingering questions later on.
That degree of direct access might not turn out to be necessary;
transhuman intelligence might suffice to figure out the causes completely
from observed fact. Nonetheless, if necessary, the big guns are there.
For the record, I volunteer in advance, even if it's not my AI. (If, of
course, volunteering is ethically necessary for this procedure.)
Note that a neuron is pretty large compared to a nanobot, so
nondestructive scanning shouldn't be any problem. Nondestructive scanning
of an electron, by contrast, would run into Heisenberg.
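The scale argument can be made concrete with a rough back-of-envelope
calculation. The figures below are my own assumptions (a typical neuron
soma diameter of ~10 micrometers against a hypothetical ~100 nanometer
nanodevice), not numbers from the original post:

```python
# Back-of-envelope size comparison; both figures are assumptions,
# not values stated in the post above.
neuron_soma_m = 10e-6   # ~10 micrometers: a typical neuron soma diameter
nanobot_m = 100e-9      # ~100 nanometers: a commonly cited nanodevice scale

ratio = neuron_soma_m / nanobot_m
print(f"A neuron soma is roughly {ratio:.0f}x the diameter of such a nanobot")
```

On these assumptions the neuron is about two orders of magnitude larger
than the probe, which is the intuition behind "shouldn't be any problem."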
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT