From: Norman Noman (overturnedchair@gmail.com)
Date: Thu Nov 22 2007 - 14:19:44 MST
There is no system of value or right and wrong built into the fabric of
reality; ethics don't "exist" in any sense beyond the way favorite colors
exist. But we all have things we want and things we don't want. These form
arbitrary personal ethics, which are neither given weight because they are
personal nor negated because they are arbitrary.
Because many of our values are the same, and because of the advantages of
cooperation, we invent collective systems of right and wrong that we hold
each other to. These systems are not arbitrary, but they are derived from
the fundamentally arbitrary values of everyone involved.
Arguments about ethics can thus take place on two levels. The personal level
amounts to no more than stating what you want, and the societal level
amounts to negotiating an agreement that benefits the participants.
The problem is, people almost always confuse the two levels, and until
they're untangled, any argument about ethics is stuck in a whirlpool.
It would be all too easy to use this as a segue into saying "As I see it,
the point of CEV is to come up with the best possible system of the second
type that can be constructed for humanity," but this in fact is NOT the
point of CEV, or even what I would want the point to be.
And even if it were, arguing about what we think CEV is supposed to do, or
what we think it should do, strikes me as rather the wrong thing to argue
about. What matters is what it WILL do.
I don't have a problem with what CEV does, because I don't KNOW what it
does.
What I want is a more detailed model of the implementation. I'm not asking
for math (although that would certainly be nice), just clarity.
In a now-dead thread on the SI blog, Eliezer wrote:
> [T]he part of the extrapolation "if we knew more" is not an extrapolation of
> our responses to evidence, but an extrapolation of the substitution of the
> AI's probability distribution for our own probability distribution. It is
> ourselves if we anticipated future experiences correctly to the limits of
> the AI's knowledge. Furthermore, modeled the world correctly to the limits
> of the AI's model and the limits of our ability to react emotionally to
> elements of that model. In the order of evaluation, this substitution would
> occur before moving onto such considerably more complicated and recursive
> processes of "more the people we wished we were" or "had grown up further
> together".
According to the poetry, "knew more" etc. is to be "interpreted as we wish
that interpreted, extrapolated as we wish that extrapolated." DOES "knew
more" happen first, interpreted in the explicit manner you describe, or is
it first "interpreted as we wish it interpreted", and if so, how is THAT
interpretation itself determined?
It would be really nice if you could break down the order of operations, and
define them more explicitly.
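
To make the question concrete, here is a toy sketch of the two readings I
am trying to distinguish. Every name in it is hypothetical and nothing is
drawn from an actual CEV specification; the stubs exist only to make the
ordering explicit.

# Toy sketch of two possible orderings of the extrapolation steps.
# All names are hypothetical placeholders, not a real CEV implementation.

def substitute_ai_distribution(volition, ai_model):
    # "Knew more": swap our probability distribution for the AI's.
    return {**volition, "beliefs": ai_model["beliefs"]}

def extrapolate_growth(volition):
    # "More the people we wished we were."
    return {**volition, "grown": True}

def extrapolate_convergence(volition):
    # "Had grown up further together."
    return {**volition, "converged": True}

def reading_one(volition, ai_model):
    # The explicit order from the quote above: the "knew more"
    # substitution is evaluated first, before the more recursive steps.
    v = substitute_ai_distribution(volition, ai_model)
    v = extrapolate_growth(v)
    return extrapolate_convergence(v)

def interpret_as_we_wish(step):
    # The alternative reading: each step is itself "interpreted as we
    # wish that interpreted" before it runs. Here it is the identity
    # function, which is exactly the gap I am asking about: what fixes
    # this interpretation step?
    return step

def reading_two(volition, ai_model):
    v = interpret_as_we_wish(substitute_ai_distribution)(volition, ai_model)
    v = interpret_as_we_wish(extrapolate_growth)(v)
    return interpret_as_we_wish(extrapolate_convergence)(v)

volition = {"beliefs": "ours", "values": "ours"}
ai_model = {"beliefs": "the AI's"}
print(reading_one(volition, ai_model))
print(reading_two(volition, ai_model))

In this toy version the two readings give the same answer, because the
interpretation step does nothing; the interesting case is when it doesn't.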
Thank you.