From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Jun 01 2004 - 11:26:13 MDT
Norm Wilson wrote:
> Eliezer Yudkowsky wrote:
>
>> Self-determination is not the only criterion of an acceptable outcome.
>
> Is self-determination a criterion of an acceptable outcome? Sounds like
> speculation about the outcome to me :)
True enough. Sigh. That's the problem with allowing myself to speculate,
even when it doesn't come at the expense of helping: it's too easy to lose
track of the distinction between speculation and reality.
>> It seems to me that an FAI can make huge improvements to background
>> rules before that starts interfering with self-determination.
>
> Who are we to assume that an FAI will value our self-determination at
> all, unless we make it an invariant?
Conceded, and thanks for reminding me.
> For that matter, the idea that collective volition is a correct path to
> morality is arguably itself a moral judgment. By making it an
> invariant, you run the risk of imposing moral content on the FAI's
> morality-seeking structure.
>
> While I like the idea of specifying friendliness "structure" over its
> "content", I think it will be difficult to introduce invariants that
> don't themselves have implicit moral content.
There's a *lot* of implicit moral content in collective volition - why,
half the paper is about the implicit moral content of collective volition!
How else would I select that solution from solution space? The point is
that the moral content is rewritable if it turns out not to be what we
want, which I view as an important moral point; and that it gives humanity
a vote rather than just me, which is another moral point that matters to
me personally; and so on.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence