Re: Ethics

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sun Feb 06 2005 - 21:24:52 MST


Thomas Buckner wrote:
>
> As I do not expect the authoritarians to give up
> power, legally or not, peacefully or not, and as
> they now have surveillance and armament
> sufficient to put down any opposition I can
> envision, I do not anticipate a random or
> open-source system here. Only a transhuman or SAI
> can undo what has been done. Even then, it will
> not necessarily matter who 'should' impose hir
> decisions on the rest; it will only be a question
> of who can.

I don't think it's true that *only* a transhuman or SI can fix that
particular facet of the mess of human politics. You described what looks
to me like a humanly solvable problem. I would prefer to discourage the
unnecessarily apocalyptic thinking which cries that mere political messes,
of the sort that have gone on for hundreds, nay thousands, of years, are so
awful that they require an FAI to solve.

The planetary death rate is so awful that it calls for an FAI to solve.

The problem of UFAI is even worse.

Politics, in the United States and other malfunctioning modern liberal
democracies, really does not seem to me to have achieved that level of
awfulness.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
