Re: Risk, Reward, and Human Enhancement
From: Byrne Hobart (sometimesfunnyalwaysright@gmail.com)
Date: Thu Dec 06 2007 - 08:28:41 MST
> How do you determine whether the gain from making one person much
> smarter outweighs the loss from making the rest of them marginally
> dumber?
My thinking was that FAI is likely to be the result of a collective
effort, but that it's going to require at least one utterly brilliant
thinker, and that the advantage to having *the* smartest person, rather than
many people in the top 1%, would be high enough to justify a sacrifice. But
it's part of a broader question: is the Singularity beneficial enough that
we ought to accept a risk of massive harm to make it happen?