From: Ben Goertzel (ben@goertzel.org)
Date: Thu Jan 08 2004 - 17:36:18 MST
Eliezer,
It might be significantly easier to engineer an AI with, say, a 20% or 1%
chance of being Friendly than to engineer one with a 99.99% chance of being
Friendly. If that is the case, then the broad-physical-dispersal approach
that I suggested makes sense.
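To make the arithmetic behind this explicit: if each deployed seed AI
independently has probability p of becoming Friendly, the chance that at
least one of N instances does is 1 - (1-p)^N. Here's a minimal sketch in
Python, with p and N as illustrative numbers of my own choosing; note that
your objection below amounts to saying p may be so small that no affordable
N moves the total meaningfully away from zero:

    # Probability that at least one of n independent seed AIs becomes
    # Friendly, assuming each has the same per-instance probability p.
    def p_at_least_one_friendly(p: float, n: int) -> float:
        return 1.0 - (1.0 - p) ** n

    print(p_at_least_one_friendly(0.20, 20))   # ~0.988: 20 instances at 20% each
    print(p_at_least_one_friendly(0.01, 500))  # ~0.993: 500 instances at 1% each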
A better analogy than sending out monkeys and hoping they'll type
Shakespeare is sending out a bunch of smart, genetically engineered kids
and hoping one will turn into something as remarkable as (though not
identical to) Shakespeare.
-- Ben G
> Ben Goertzel wrote:
> >
> > -- a seed AI, triggered to awaken in N years and start evolving toward
> > friendly superintelligence
> > -- a simulated world for the seed AI to play in
> > -- a universe of virtual humans embodied in the simulated world
> >
> > Statistically, some percentage of these AIs will become Friendly
>
> And statistically, some number of times I put my hand on my desk, my hand
> will tunnel through the potential energy barrier. That there is a
> statistical chance of something doesn't mean you can afford enough
> instances to raise the total probability to something significantly
> different from zero - not without the exercise of the same skills that
> would be required to make a single instance work.
>
> Let's send out enough monkeys; maybe one of them will type Shakespeare.
>
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence