From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Sat Jun 29 2002 - 07:54:27 MDT
Ben Goertzel wrote:
>> SIAI is planning to make a small grant to the Novamente project which
>> will pay them to do this immediately instead of in the indefinite future
> This is true... and many thanks to SIAI for this (very very) small grant
> (whose very small size is possible due to the willingness of our
> Brazilian implementation team to work for slave labor rates... since they
> want a thinking machine as badly as I do...)
> However, this initial failsafe mechanism will involve a very crude way of
> gauging "intelligence increase" and hence will be of limited utility.
> The real work is in measuring intelligence increase in a way flexible
> enough to "expect the unexpected", not in simply writing the code to shut
> things down when an appropriate "intelligence increase" trigger condition
> is reached (which is what we'll be doing shortly with this very small
> SIAI grant)...
> Left on our own, we would have waited to write the "failsafe mechanism"
> mechanics code until we had what we considered a good "intelligence
> increase measure," which will be sometime in 2003 if things go as
> planned. Creating the intelligence increase measure is an order of
> magnitude more work than writing the failsafe mechanism code...
Actually, Ben's exact words to us were these:
Ben Goertzel wrote:
> I *do* understand the value of the controlled ascent mechanism. However,
> although I think Novamente has the potential to serve as a seed AI whereas
> Eliezer disagrees, I *still* think it's premature to put such a mechanism in.
> Our plan would be to put such a mechanism in the system K years down the
> road, when we had a system that we thought had a prayer of launching into
> the hard takeoff...
> Eliezer believes it's worth putting such a mechanism in NOW, to fend off the
> infinitesimal risk of an imminent hard takeoff, and I disagree. So I told
> him I would have it put in now if he'd pay for the work.
Hence the grant.
--
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT