From: Rolf Nelson (rolf.h.d.nelson@gmail.com)
Date: Mon Feb 11 2008 - 18:58:05 MST
Easier said than done; it's not clear to me that the advocacy approach is an
order of magnitude better, so the fact that SIAI works both the advocacy and
the implementation angles makes sense to me for now. Put another way, there
are 6 billion humans on the planet; why should someone *a priori* even
take a minute to listen to SIAI's opinions? How many people would like to
talk you, personally, into changing your mind about something important, and
what percentage of them actually succeed? Keep in mind that getting someone to
half-heartedly give lip-service to Friendly AI isn't much of a win; they
have to be motivated to, at the very least, make conscious design decisions
and risk slowing their project down, just for the sake of friendliness.
But yes, it's not an invalid point. Are there specific high-payoff tasks in
this area that we should be doing, but aren't?
On Feb 11, 2008 9:17 AM, Shane Legg <shane@vetta.org> wrote:
> On 10/02/2008, Rolf Nelson <rolf.h.d.nelson@gmail.com> wrote:
>
> > my own estimate is that SIAI directly saves mankind at about 200:1 odds.
>
> If this is the case, then it seems to suggest that SIAI should be less
> focused on building their own AGI, and more focused on the far more likely
> scenario that somebody else builds the first AGI, and SIAI tries to
> influence and guide the situation towards a good outcome.
>
> Shane
>
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT