From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jul 19 2005 - 15:18:43 MDT
Russell Wallace wrote:
> On 7/18/05, Philip Sutton <Philip.Sutton@green-innovations.asn.au> wrote:
>
>>I reckon we should start from the perspective of a person who is advising
>>the makers of AGIs in another galaxy. What friendliness goals would we
>>recommend that they adopt?
>
>
> Well, whatever they choose to do to themselves, what I'd want them to
> do regarding us is simply leave us alone, at least until we're at the
> point where we can talk to them at their own level. That makes sense
> to me as a policy for our FAI if we manage to build one: if hostile
> aliens are encountered, fight back, but if non-hostile intelligent
> life is encountered, leave it alone unless/until it gets to the same
> level as us.

That would explain the Fermi Paradox. But would you really want aliens to
permit the Holocaust? Would you let it occur on some alien world, if you
could see, and knew, and had the power to stop it? If third parties stepped
in to help those who wished for help, would you fight to stop them? If there is
any force that interdicts our world, preventing even others from helping us as
we would wish to be helped, then I cannot consider them friends. It goes
back to the disturbing question of aliens with alien motives successfully
constructing FAI, a useless hypothesis that explains anything and everything
and makes no further predictions even if it's true.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence