From: Gordon Worley (redbird@rbisland.cx)
Date: Wed Aug 01 2001 - 08:16:13 MDT
At 2:02 PM +0200 8/1/01, Joaquim Almgren Gándara wrote:
>As time goes on, it will be harder, I agree. So hard, in fact, that it
>might prove to be impossible for humans to create an intelligence
>seven times smarter than themselves. Of course, if we can get the
>positive feedback loop started, there's no telling how intelligent an
>AI can get. But how do we start it if the AI takes one look at its own
>code and just gives up?
If this happens (I consider it unlikely, but we just might create an
AI novelist rather than a computer scientist by mistake), then what we
do is write an AI tool that only knows how to improve code. It may
not be very smart about it (e.g. it may have to find new
algorithms genetically), but it will work on the code for the AI and
hopefully get it up to the point where ve can modify vis own code. Of
course, the code-improving AI is going to need to work on itself, too,
but hopefully not too much (i.e. just until we get the main AI up to
improving vis own code).
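To make the "find new algorithms genetically" aside concrete, here is a toy sketch of that kind of blind genetic search: candidate "programs" are short sequences of arithmetic steps, and mutation plus selection pushes the population toward one that matches a target function. All the names and the program representation here are my own invention for illustration, not anything from an actual AI project; a real code-improving tool would operate on far richer structures than this.

```python
import random

# Toy "programs": sequences of (op, constant) steps applied to an input x.
OPS = {
    "add": lambda x, c: x + c,
    "sub": lambda x, c: x - c,
    "mul": lambda x, c: x * c,
}

def run(program, x):
    """Execute a program by threading x through each (op, constant) step."""
    for op, c in program:
        x = OPS[op](x, c)
    return x

def fitness(program, target, xs):
    """Lower is better: total squared error against the target function."""
    return sum((run(program, x) - target(x)) ** 2 for x in xs)

def mutate(program):
    """Randomly tweak one step: change its operation or nudge its constant."""
    prog = list(program)
    i = random.randrange(len(prog))
    op, c = prog[i]
    if random.random() < 0.5:
        prog[i] = (random.choice(list(OPS)), c)
    else:
        prog[i] = (op, c + random.uniform(-1.0, 1.0))
    return prog

def evolve(target, xs, pop_size=50, generations=300, length=3):
    """Keep the best half each generation; refill with mutated survivors."""
    pop = [[(random.choice(list(OPS)), random.uniform(-2.0, 2.0))
            for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, target, xs))
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda p: fitness(p, target, xs))

if __name__ == "__main__":
    random.seed(0)
    # Try to rediscover f(x) = 3x + 1 from squared-error feedback alone.
    best = evolve(lambda x: 3 * x + 1, xs=[0, 1, 2, 3, 4])
    print(best, fitness(best, lambda x: 3 * x + 1, [0, 1, 2, 3, 4]))
```

The point of the sketch is the dumbness of the search: nothing in it understands what the code does, yet selection pressure alone grinds toward a working program, which is roughly the fallback being proposed for a tool that can't reason about the AI's code directly.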
--
Gordon Worley
http://www.rbisland.cx/
redbird@rbisland.cx
PGP: 0xBBD3B003
`When I use a word,' Humpty Dumpty said, `it means just what I choose
it to mean--neither more nor less.' --Lewis Carroll
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT