From: Randall Randall (randall@randallsquared.com)
Date: Mon May 17 2004 - 22:32:37 MDT
On May 17, 2004, at 9:46 PM, Michael Roy Ames wrote:
>
> It should go without saying that a superintelligent Friendly being is
> going to know what is in our 'best interest' better than we do.
While a being intelligent enough to simulate a human may be able to
determine what is in that human's "best interest" better than the
human, it seems quite unlikely that existing humans will stay at
current-day human-level intelligence long enough for this to be useful.
Further, solving the problem of "best interest" for a *society*
involves a textbook combinatorial explosion, which suggests it
may not be solvable at all, unless you are using "solved" to refer
to how humans have been doing so far, including wars, purges, etc.
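To see the scale of the explosion, consider a toy model (my own illustration, not the author's): if each of n individuals can independently rank m possible outcomes, the number of joint preference profiles a planner would have to reconcile is (m!)^n, which grows factorially in the outcomes and exponentially in the population.

```python
from math import factorial

def preference_profiles(individuals: int, outcomes: int) -> int:
    """Toy model: each individual ranks the outcomes independently,
    so each has factorial(outcomes) possible orderings, and the joint
    profile space is that count raised to the number of individuals."""
    return factorial(outcomes) ** individuals

# Even tiny societies over a handful of outcomes blow up quickly:
for n in (2, 5, 10):
    print(n, preference_profiles(n, 5))
```

With only 5 outcomes, two people already have 14,400 joint profiles, and ten people have on the order of 10^20; a real society with astronomically many outcomes is far beyond any exhaustive treatment.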
It certainly does not "go without saying" that superintelligence
solves the problem of complexity, here.
--
Randall Randall <randall@randallsquared.com>
'I say we put up a huge sign next to the Sun that says "You must be
at least this big (insert huge red line) to ride this ride".'
 -- tghdrdeath@hotmail.com
This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:36 MST