From: Samantha Atkins (firstname.lastname@example.org)
Date: Tue Jun 20 2006 - 20:44:22 MDT
On Jun 19, 2006, at 2:12 PM, H C wrote:
> Optimally though, we wouldn't want anybody to develop an AGI until
> Friendliness theory can tell us that we are definitely not going to
> explode the Universe by flipping the 'on' switch.
This is only optimal if we can survive in the interim with intelligence
more limited than AGI. It is not clear to me that we can.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT