From: James Higgins (jameshiggins@earthlink.net)
Date: Wed Aug 01 2001 - 19:02:17 MDT
At 07:10 PM 8/1/2001 +0200, you wrote:
>Jimmy Wales:
> > If even billions and billions of scientists, working together in an
> > incredibly high-speed networked fashion, working 24/7, can't achieve
> > anything more intelligent than we were able to do with only several
> > million scientist man-hours, I'll be very surprised. Wouldn't you?
>
>I would. However, I don't like the fact that the only thing we can
>come up with is brute-force solutions. Isn't this what Eliezer
>abandoned quite a while ago? I seem to remember that he wrote
>somewhere that he used to think that you would need lots and lots of
>researchers to reach human-level AI ~simply because he didn't know how
>to do it~. We've only moved the horizon a bit. Now we're saying that
>once we reach human-level AI, we use brute force to get to trans-human
>AI. This worries me.
I don't think this should worry you. Think of it as the emergency backup
plan: if all else fails, run the AI researcher software on a billion
computers and see what it comes up with. Given enough time and resources, I
believe even a million or fewer human-equivalent scientists could solve
virtually any scientific problem. If this were our worst-case scenario
(and I don't think it is), I'd be having a celebration right now!
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT