From: Daniel Radetsky (daniel@radray.us)
Date: Tue Mar 08 2005 - 14:47:46 MST
On Mon, 07 Mar 2005 22:55:29 -0800
"Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
> > - Human code is highly modular, to the detriment of performance. By this and
> > the above, humans have a small short-term memory.
>
> But the last item will be available, and it and other structural cues
> are sufficient information (given sufficient computing power) to deduce
> that humans are fallible, quite possibly even that humans evolved by
> natural selection.
I don't see why you believe there will be that much there to find, or that
*any* AI would have the right kind of background knowledge to make that
inference. Computing power is not a catch-all; you need facts too.
> We know a tremendous amount about natural selection on the basis of (a)
> looking at its handiwork and (b) thinking about the math of evolutionary
> biology that systematizes the evidence, given the nudge from the evidence.
No doubt, but we wouldn't have anything without (a).
> One can equally well imagine a Minerva superintelligence (that is, a
> fully functional transhuman AGI created as pure code without any
> real-time interaction between the programmer and running AGI code) that
> studies its own archived original source code. This is a smaller corpus
> than terrestrial biology, but studiable in far greater detail and much
> more tractable to its intelligence than is DNA to our own intelligence.
I don't see your reasons for this. Why do you think a few megabytes of
source code (or machine code, which is what it would probably actually want
to work with) contain enough information for such a powerful inference?
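(Back of the envelope, my numbers rather than anything from the thread: a
few megabytes is on the order of 10^7 bits. The human genome alone runs to
roughly 3 x 10^9 base pairs, about 6 x 10^9 bits before compression, and
terrestrial biology as a whole is a vastly larger evidence base still. The
corpus you're proposing is orders of magnitude smaller than the one we
needed before we could infer natural selection.)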
> I would not be surprised to see the Minerva SI devise a theory of
> human intelligence that was correct on all major points and sufficient
> for manipulation.
I would be shocked.
> The point is probably irrelevant. A Minerva FAI seems to me to be three
> sigma out beyond humanly impossible. I can just barely conceive of a
> Minerva AGI, but I would still call it humanly impossible.
You are probably far more qualified than I am to say whether or not a
Minerva AI is feasible. But consider that AI in general is very dangerous,
and the inability to use an AI jail adds to that danger tremendously. If it
were true that a Minerva could be safely jailed, that would be a very good
reason to evaluate carefully just how impractical it is, and whether there
are any good ways to make it more practical.
Yours,
Daniel