Re: Volitional Morality and Action Judgement

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sat May 29 2004 - 19:30:37 MDT


Mark Waser wrote:
>
> I fully agree with this statement; however, I'm seeing a lot of debate
> between Eliezer and Ben that has devolved to the point where Eliezer is no
> longer willing to fully engage with valid points. Once Eliezer is no longer
> willing to engage with an individual who is probably closest to him in terms
> of understanding/drive/etc., then, *in my way of looking at
> things*, Eliezer has forfeited a huge chunk of his
> responsibility/moral authority/effectiveness/whatever.

Ben is nowhere remotely near the person closest to me in terms of
understanding. People to whom I can try speaking about the technical side
of Friendliness include Michael Wilson and James Rogers, both of whom have
received their Bayesian enlightenment (what James Rogers calls "becoming
one who deeply understands algorithmic information theory"). Ben's a nice
guy, but he isn't on that list. And his relaxed outlook on life and
existential risks does not put him close to me in terms of "drive", either.

If you want to know whether I've gone nuts, I would suggest privately
asking James Rogers. Michael Wilson and Michael Raimondi are both capable
of making the judgment, but their judgment is already known to me, and so
it is not a fair test. (Michael Wilson thinks I'm evil and insane, but in
a healthy, competent, benevolent way.)

I do intend to resume the "external reference semantics" thread, subject
appropriately changed to "FAI: external reference semantics", if I can get
the time. Right now I'm in the middle of working on a final draft of a
quick update to Friendliness called "Collective Volition", and have at
least three other things on my plate, so it may not happen. I do see it as
a legitimate challenge, so I wouldn't argue if you said that I lost serious
points for not responding. Happens, though.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
