From: Mark Waser (firstname.lastname@example.org)
Date: Wed May 26 2004 - 19:07:48 MDT
----- Original Message -----
From: "Eliezer Yudkowsky" <email@example.com>
> Ben? Put yourself in my shoes for a moment and ask yourself the question:
> "How do I prove to a medieval alchemist that there is no way to concoct
> an immortality serum by mixing random chemicals together?"
Wow. That's REALLY arrogant. Of course, it does fit well with your current
positions:
1. Everything is too dangerous to attempt until it has been fully thought
out.
2. You're thinking it out but can't explain it (both due to lack of time
and because someone else might use your explanation improperly)
3. We should just trust that you'll figure it out and single-handedly save
the world.
Did I get that all correct?
My belief is that YOU took a wrong turn very early in the process and are
rabbit-holing at an amazing rate. Relying upon a single point of failure
(meaning both a single FAI and a single you) is incredibly foolish. The
best way to ensure that an AI is truly friendly is to make it social, make
it realize that it is imperfect and capable of mistakes and bad decisions,
and have it realize that the best way to avoid mistakes is to consult (and
work with) others. Of course, since you don't exhibit these behaviors . . .
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT