From: Thomas McCabe (pphysics141@gmail.com)
Date: Wed Mar 12 2008 - 21:48:16 MDT
On Wed, Mar 12, 2008 at 11:05 PM, Mark Waser <mwaser@cox.net> wrote:
> > Try reading http://yudkowsky.net/singularity.html to get an idea of
> > the potential power behind AGI. Note that this paper was originally
> > written in 1996.
>
> Please assume that I have read (several times) and assimilated all of the
> papers on the SIAI and Yudkowsky websites (unless you can see specific
> points that you believe that I have missed). I am not so arrogant that I
> would have done something like this without attempting to assimilate all of
> the necessary background information. *No one* is good enough to do that.
>
>
> > This is the vast majority of systems. In general, there are going to
> > be many more simple systems than complex systems, because each
> > additional bit of complexity requires additional optimization power.
> > This is the principle behind Solomonoff induction.
>
> Of course. The vast majority of systems are going to be simple but they
> will also be unintelligent. The intelligent systems are going to be complex
> and have many goals (or else all the effort in making them complex was
> wasted).
>
>
Complexity(intelligence) < Complexity(intelligence + complicated goal
system) < Complexity(intelligence + Friendly goal system). As
complexity increases, prior probability drops off *very* fast: each
additional 10 bits of complexity means roughly a factor-of-1,000
(2^10 = 1,024) decrease in prior probability.
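To make the scaling concrete, here is a minimal sketch of a
Solomonoff-style length prior, where P(program) is proportional to
2^(-K) for a program K bits long. The bit counts are made-up
placeholders for illustration, not estimates of the actual complexity
of any goal system:

    # Toy Solomonoff-style length prior: P(program) ~ 2**(-K),
    # where K is the program's length in bits. The bit counts below
    # are hypothetical placeholders, not real complexity estimates.

    def prior_ratio(extra_bits: int) -> float:
        """Factor by which the prior shrinks when a program needs
        `extra_bits` more bits than the baseline."""
        return 2.0 ** -extra_bits

    cases = {
        "intelligence alone": 0,            # baseline
        "+ complicated goal system": 10,    # 10 extra bits (assumed)
        "+ Friendly goal system": 100,      # 100 extra bits (assumed)
    }

    for label, extra in cases.items():
        print(f"{label:28s} prior x{prior_ratio(extra):.3g} vs. baseline")

    # Output:
    # intelligence alone           prior x1 vs. baseline
    # + complicated goal system    prior x0.000977 vs. baseline
    # + Friendly goal system       prior x7.89e-31 vs. baseline

The absolute bit counts don't matter for the point; only the
differences do, since the prior ratio depends solely on the extra bits.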
--
 - Tom
http://www.acceleratingfuture.com/tom