From: Samantha Atkins (email@example.com)
Date: Sat May 22 2004 - 00:56:06 MDT
On May 21, 2004, at 1:28 AM, Eliezer Yudkowsky wrote:
> I think I have to side with Keith. I fear that human
> self-modification is far more dangerous than I would once have liked
> to imagine. Better to devise nutritious bacon, cheese, chocolate, and
> wine, than dare to mess with hunger - let alone anything more complex.
> You would practically need to be a Friendly AI programmer just to
> realize how afraid you needed to be, and freeze solid until there was
> an AI midwife at hand to help you *very slowly* start to make
> modifications that didn't have huge unintended consequences, or take
> you away from the rest of humanity, or destroy complexity you would
> have preferred to keep.
The above contains the perhaps fatal assumption that we are capable,
without augmentation, of building that AI midwife. We will start
slowly out of sheer necessity, not being all that bright or capable.
With a bit of luck, we ourselves will be brighter and more capable by
the time each level of possibility is within our grasp. It is a
dangerous game, but the game is already afoot.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT