From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue May 24 2005 - 17:22:32 MDT
Peter C. McCluskey wrote:
> Eric Baum
> makes some moderately strong arguments in his book What is Thought?
> against your claim. If your plans depend on a hard takeoff and your
> reasons for expecting a hard takeoff are no better than the ones I've
> run across, then I'm pretty certain you will fail.
Eric Baum calculates 10^36 operations to get intelligence, based on the number
of organisms that have ever lived. To see why this number is wrong, you may
consult http://dspace.dial.pipex.com/jcollie/sle/ or, for more background,
George Williams's "Adaptation and Natural Selection."
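(For the sake of concreteness, here is a minimal back-of-envelope sketch of how a figure of that order of magnitude could be composed. This is not Baum's actual derivation; both input numbers are placeholder assumptions, chosen only so the product lands near 10^36.)

    # Illustrative back-of-envelope only -- NOT Baum's published calculation.
    # Both constants below are assumed placeholders, chosen so the product
    # comes out on the order of 10^36.

    ORGANISMS_EVER_LIVED = 1e30   # assumed count of organisms in Earth's history
    TRIALS_PER_ORGANISM = 1e6     # assumed "evaluations" contributed per organism

    total_operations = ORGANISMS_EVER_LIVED * TRIALS_PER_ORGANISM
    print(f"estimated evolutionary 'operations': {total_operations:.0e}")  # ~1e+36

The point of the Williams reference is that counting organisms this way overstates the optimization power evolution actually applied, so the product is not a meaningful lower bound on the computation needed for intelligence.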
Have you read http://intelligence.org/LOGI/seedAI.html? I ask for purposes of
information.
> What kind of goal system do you plan to have built into the RPOP while
> it is computing CV? Presumably you see the risk that it will take over
> the world before you get around to using any results from the CV
> simulations. Yet I can't see any indication of how well you would be
> able to handle this risk.
You'd build a temporary goal system marked reflectively as an approximation to
CEV (collective extrapolated volition). You can toss as much safety as you
want into the temporary goal system, and the safeties will stay there until
the CEV is defined enough to execute a complete rewrite and wash out all the
ornamentation and tinsel. I don't think humans could build an AI that had no
goal system at all until it was already a superintelligence.
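(A minimal sketch of that arrangement, in Python, assuming nothing beyond what the paragraph above says: the interim goal system is reflectively tagged as an approximation to CEV, carries whatever safety constraints the programmers attach, and is replaced wholesale once the extrapolation is judged well-defined enough. The class and field names are hypothetical, not any actual SIAI design.)

    # Sketch only: a provisional goal system marked as an approximation to CEV,
    # with attached safeties, discarded in one rewrite when CEV is defined enough.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class GoalSystem:
        goals: List[str]
        is_approximation_to_cev: bool                      # reflective marker: "I am provisional"
        safety_constraints: List[Callable[[str], bool]] = field(default_factory=list)

    def interim_goal_system() -> GoalSystem:
        # Temporary goals plus as much safety as you want; the lambda is a
        # stand-in for real checks, not a specification.
        return GoalSystem(
            goals=["approximate collective extrapolated volition"],
            is_approximation_to_cev=True,
            safety_constraints=[lambda action: True],
        )

    def maybe_rewrite(current: GoalSystem, cev_defined_enough: bool,
                      extrapolated: GoalSystem) -> GoalSystem:
        # The safeties stay until CEV is defined enough to execute a complete
        # rewrite; then the interim system is washed out entirely.
        if current.is_approximation_to_cev and cev_defined_enough:
            return extrapolated
        return current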
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence