From: William Pearson (email@example.com)
Date: Wed Feb 20 2008 - 14:19:46 MST
On 20/02/2008, Robin Lee Powell <firstname.lastname@example.org> wrote:
> On Wed, Feb 20, 2008 at 11:07:43AM -0800, Peter C. McCluskey wrote:
> > Presumably part of the disagreement is over the speed at which AI
> > will take off, but that can't explain the certainty with which
> > each side appears to dismiss the other.
> I disagree, actually; for me that is the entire argument. If your
> AI is mind-blind in such a way that it would drop a piano off a
> ledge without thinking to look down, that doesn't matter unless it
> gets smart enough to crack nanotech before you can stop it. The
> mere *possibility* of a hard-takeoff AI that doesn't like humans
> (through indifference or malice) terrifies me enough that I'm a firm
> backer of the FAI camp. If I didn't think hard takeoff was
> possible, I wouldn't care very much one way or the other at all,
> because if it takes decades for the AI to become super-humanly
> smart, that's decades for us to figure out that it's warped.
I agree. So would it be worthwhile to the debate to try to formalise
what we mean by hard takeoff or self-improvement, and to see what the
physics has to say about it?
If you accept that the rate of improvement of a learning system is
bounded by the information bandwidth into it, then we can start to put
bounds on the rates of improvement of different systems based on
energy usage and hardware. For example, a PC with two DDR2-800
modules, each clocked at 400 MHz, limits the software running on it to
improving itself at 12.8 GB/s, its peak memory bandwidth, or much less
if you only count external input over the network and keyboard/mouse.
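The 12.8 GB/s figure above can be checked with a quick back-of-the-envelope calculation (the function name and the 10 Mbit/s broadband figure are my own illustrative assumptions, not from the original post):

```python
def ddr2_bandwidth_gb_s(clock_mhz: float, modules: int = 1) -> float:
    """Peak DDR2 transfer rate: I/O clock * 2 (double data rate)
    * 8 bytes per transfer (64-bit bus), times the number of modules."""
    transfers_per_s = clock_mhz * 1e6 * 2   # double data rate
    bytes_per_s = transfers_per_s * 8       # 64-bit bus = 8 bytes/transfer
    return modules * bytes_per_s / 1e9

# Two DDR2-800 modules at a 400 MHz I/O clock:
memory_bound = ddr2_bandwidth_gb_s(400, modules=2)   # 12.8 GB/s

# If only external input counts as "new information", a 10 Mbit/s
# broadband link is a far tighter bound:
external_bound = 10e6 / 8 / 1e9                      # 0.00125 GB/s

print(memory_bound, external_bound)
```

The four-orders-of-magnitude gap between the two bounds is the crux: which channel actually limits self-improvement depends on whether reorganising information already in memory counts as "improvement".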
What do people think about the fruitfulness of developing this line of
argument? When people start positing new physics, they tend to lose
me. Yep, I know our physics isn't perfect. But reasoning from the
possibility of new physics is a bit too much of a leap of faith for me.
This archive was generated by hypermail 2.1.5 : Tue May 21 2013 - 04:01:01 MDT