Re: Risk, Reward, and Human Enhancement

From: Byrne Hobart
Date: Thu Dec 06 2007 - 08:28:41 MST

> How do you determine whether the gain from making one person much
> smarter outweighs the loss from making the rest of them marginally
> dumber?

My thinking was that FAI is likely to be the result of a collective
effort, but that it's going to require at least one utterly brilliant
thinker, and that the advantage of having *the* smartest person, rather than
many people in the top 1%, would be high enough to justify a sacrifice. But
it's part of a broader question: is the Singularity beneficial enough that
we ought to accept a risk of massive harm to make it happen?

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT