Re: Is a theory of hard take off possible? Re: Investing in FAI research: now vs. later

From: Krekoski Ross (rosskrekoski@gmail.com)
Date: Wed Feb 20 2008 - 15:44:44 MST


This is an excellent argument, and one that has been neglected, IMO, in
much of the discussion of hard takeoff.

I think a great deal of the 'fear' surrounding hard takeoff stems from
the prototypical 'Skynet' scenario, whereby an AI is networked and so
has a much less limited architectural base with which to self-improve.

I would also add, though, that we should distinguish between a lab-type
AI taking off, where the above constraint is quite realistic since the
system exists in a more or less isolated environment, and a hypothetical
botnet-type AI, which is admittedly much further off but would be less
bound by such constraints.

I also tend to agree about the confusion surrounding 'new physics'; we
need to define the discussion a bit more precisely.

Ross Krekoski

On Wed, Feb 20, 2008 at 9:19 PM, William Pearson <wil.pearson@gmail.com>
wrote:

> On 20/02/2008, Robin Lee Powell <rlpowell@digitalkingdom.org> wrote:
> > On Wed, Feb 20, 2008 at 11:07:43AM -0800, Peter C. McCluskey wrote:
> > > Presumably part of the disagreement is over the speed at which AI
> > > will take off, but that can't explain the certainty with which
> > > each side appears to dismiss the other.
> >
> >
> > I disagree, actually; for me that is the entire argument. If your
> > AI is mind-blind in such a way that it would drop a piano off a
> > ledge without thinking to look down, that doesn't matter unless it
> > gets smart enough to crack nanotech before you can stop it. The
> > mere *possibility* of a hard-takeoff AI that doesn't like humans
> > (through indifference or malice) terrifies me enough that I'm a firm
> > backer of the FAI camp. If I didn't think hard takeoff was
> > possible, I wouldn't care much one way or the other, because if it
> > takes decades for the AI to become superhumanly smart, that's
> > decades for us to figure out that it's warped.
> >
>
> I agree. So would it be worthwhile to the debate to try to formalise
> what we mean by hard takeoff or self-improvement, and to see what the
> physics has to say about it?
>
> If you accept that the rate of improvement of a learning system is
> bounded by the information bandwidth into it, then we can start to put
> bounds on the rates of improvement of different systems based on
> energy usage and hardware. For example, a PC with two DDR2-800 modules
> (each on a 400 MHz I/O clock, transferring data twice per cycle)
> limits any software running on it to improving itself at 12.8 GB/s,
> its memory bandwidth, or far less if you only count external input
> through the web connection and keyboard/mouse.
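>
> A minimal sketch of the arithmetic behind that 12.8 GB/s figure, in
> Python (the 64-bit bus per channel and dual-channel operation are
> assumptions about a typical setup, not a checked spec):
>
>     # DDR2-800: 400 MHz I/O clock, two transfers per cycle (DDR),
>     # 64-bit (8-byte) bus per channel, dual channel (assumed).
>     clock_hz = 400e6
>     transfers_per_cycle = 2
>     bytes_per_transfer = 8
>     channels = 2
>
>     peak_bytes_per_sec = (clock_hz * transfers_per_cycle
>                           * bytes_per_transfer * channels)
>     print(peak_bytes_per_sec / 1e9, "GB/s")  # -> 12.8 GB/s
>
> For comparison, if the only input channel is a broadband link of,
> say, 10 Mbit/s (1.25 MB/s), the external-input bound sits roughly
> four orders of magnitude below the memory-bandwidth bound.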
>
> What do people think about the fruitfulness of developing this line of
> thought further?
>
> When people start positing new physics, they tend to lose me. Yep, I
> know our physics isn't perfect, but reasoning from the possibility of
> new physics is a bit too much of a leap of faith for me.
>
> Will Pearson
>


