From: Damien Broderick (firstname.lastname@example.org)
Date: Tue Jun 01 2004 - 12:46:44 MDT
Interesting if somewhat baggy paper. Thanks for posting it!
At 01:26 PM 6/1/2004 -0400, Eliezer wrote:
>The point is that it's rewritable moral content if the moral content is
>not what we want, which I view as an important moral point; that it gives
>humanity a vote rather than just me, which is another important moral
>point to me personally; and so on.
My sense was that it gives the system's [mysteriously arrived at] estimate
of humanity's [mysteriously arrived at] optimal vote. That, as Aubrey
pointed out, is a very different thing, and the difference is critical.
By the way, two fragments of the paper caught my attention:
< The dynamics will be choices of mathematical viewpoint, computer
programs, optimization targets, reinforcement criteria, and AI training
games with teams of agents manipulating billiard balls. >
I'd like to see some. And:
< The technical side of Friendly AI is not discussed here. The technical
side of Friendly AI is hard and requires, like, actual math and stuff. (Not
as much math as I'd like, but yes, there is now math involved.) >
I'd still like to see some.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT