Re: How do you know when to stop? (was Re: Why playing it safe is dangerous)

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Feb 25 2006 - 10:50:29 MST


> This is a key problem with Friendly AI, though... You have to test
> your programs to learn anything and make progress towards AI, and
> you will need programs that learn in order to make that progress.
> We may very well reach the point where we need to build
> self-modifying programs, in order to progress further towards AI,
> long before those programs are actually smart enough to be dangerous.

Yes, this is my strong suspicion...

> Computer scientists always think their programs are going to be much,
> much, MUCH smarter than they end up being. If we stop turning our
> programs on as soon as we think they might be smart enough to be
> dangerous, we would probably be stopping two decades too soon. So how
> are we ever to progress?

Progress on AGI via pure theory, with minimal or no experimentation,
may be possible ... but I don't think that is how superhuman AGI will
come about, because progress with a combination of theory and
experiment will almost surely be faster, so the theorist/experimenters
are almost sure to outpace the pure theorists...

-- Ben


