From: David Picon Alvarez (eleuteri@myrealbox.com)
Date: Wed Feb 15 2006 - 01:38:26 MST
First, you've been rather eloquent in defending this point. I still stand
where I did, but I'll think further on it.
From: "Michael Vassar" <michaelvassar@hotmail.com>
> Truly bizarre. You assert that you believe knowledge is *always*
> preferable to happiness, and support it with a datum showing that
> knowledge is *sometimes* preferable to happiness, e.g. in the volcano
> example.
I'd agree with you except for the qualifier "sometimes". I'd say knowledge
is most often good, and when it conflicts with happiness it is still often
preferable. I didn't choose that example as proof, just as illustration.
> It is in fact a trivial consequence of game theory that knowledge can
> make a group of individuals worse off in net, or that an individual's
> knowledge is often more costly to others than it is beneficial to
> himself.
I can see the second point easily enough, not sure about the first. Part of
the disagreement is rooted in my beliefs about knowledge as inherently
desirable, I suppose.
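For concreteness, here is a minimal numeric sketch of how the first point
can go. It is a toy example of my own, in the spirit of Hirshleifer's
classic result on public information, not anything stated in this thread:
revealing in advance who will be unlucky destroys risk-pooling, and by
Jensen's inequality every risk-averse agent is worse off ex ante.

import math

# Toy illustration of the Hirshleifer effect: publicly revealing who
# will suffer a loss destroys mutually beneficial insurance, leaving
# every risk-averse agent worse off in expectation.

WEALTH = 100.0  # each agent's initial wealth
LOSS = 50.0     # exactly one of the two agents will suffer this loss
P = 0.5         # each agent is equally likely to be the unlucky one

def utility(w: float) -> float:
    return math.log(w)  # concave utility, i.e. risk aversion

# No knowledge: neither agent knows who will be hit, so they agree in
# advance to split the loss. Each ends with WEALTH - LOSS/2 for certain.
eu_pooled = utility(WEALTH - LOSS / 2)

# Knowledge: the victim is revealed before any contract can be signed,
# so no one will insure him. Ex ante each agent faces the full gamble.
eu_revealed = (1 - P) * utility(WEALTH) + P * utility(WEALTH - LOSS)

print(f"expected utility, risk pooled (no knowledge): {eu_pooled:.4f}")
print(f"expected utility, knowledge made public:      {eu_revealed:.4f}")

# log(75) ~ 4.318 > 4.259 ~ 0.5*log(100) + 0.5*log(50): by concavity,
# both agents are strictly worse off once the knowledge is public.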
> As for the supposed dysfunction of the 6 billion or so humans who can be
> made unhappy by having knowledge (technically, what else *could* someone
> be made unhappy by? People are information, their unhappiness is also
> information, and is in fact, among other things, the knowledge that they
> are unhappy), so what?
Sure, people are information; everything is, or at least it looks that way
to me. Knowledge is not exactly information, though: it is information that
represents something external. That the Sun is an AU away from Earth is
knowledge; that I am enjoying myself is part of my internal state. That the
Sun is 2 AU away from Earth is information, but not knowledge, since it
does not represent an external truth.
> Declaring people to be dysfunctional because they fail to meet certain
> standards that you set for them is definitely not humane. It seems to me
> to be more like the vindictive preference of one of evolution's creations
> that the environment should impose selective stresses which he is
> atypically well equipped to handle, not unlike the hope that a strong man
> in a tribe might hold that wars and wild beasts will come and enable him
> to translate his strength into status and differential survival relative
> to his weaker peers.
I could always say that I don't think that's the case, that my ability (or
presumed ability) to deal with knowledge other people might find troublesome
has nothing to do with this view of mine. But you could always answer that
it might affect my way of thinking about the issue, and I suppose it might,
so I can't answer this objection effectively.
> A culturally benighted person, perhaps from an Islamic country, could be
> made worse off by knowing that his daughter is not a virgin (honor
> killings).
Sure, but in the context of more knowledge this person would know that his
cultural standards are bogus, and that the virginity of his daughter is of
little importance. Limited knowledge isn't necessarily good, that much I
agree with.
> A sensitive, philosophical Westerner of Christian background, IQ 125, high
> openness, and high neuroticism could be made a great deal worse off by
> knowing that there is no reason to believe in his mythology (existential
> despair possibly leading to clinical depression).
And thereby write some interesting Kierkegaard-style philosophy of some
value.
> You can be saddened by your cursory familiarity with thermodynamics (a
> little knowledge is a dangerous thing; learn it for real, with the math,
> and I think you will feel better), but apparently not by the Scylla and
> Charybdis of chaos and determinism which one must also pass before
> becoming comfortable with GAI.
I can't say that my knowledge of thermodynamics is a lot more than cursory,
but I'd imagine the inevitable cessation of subjective experience on a
global scale would make some people rather unhappy. Insofar as the maths
point to this, I don't think there's much need to understand more.
> Michael Wilson could hypothetically find himself in a situation where very
> violent people very badly want him to program a seed AI for them before
> Friendliness theory is complete. In such a situation he would not be able
> to comply with their wishes, and would be made worse off by knowing the
> details of what his friends and family were experiencing as a result.
I'm not sure about this. Knowing that one's family is being hurt can make
one unhappy, but not knowing what happens to them can do so too. Also,
arguably, seeing the way his family is hurt would inform him of the ethical
level of those people and help him choose, for better utility, not to
cooperate.
> Each of these scenarios involves people falling along a continuum of
> increasing mental resilience, but ultimately I believe that they differ
> by degree, not by kind. A FAI would not suffer in any of these situations
> because it cannot suffer and is not a person, but I would be deeply
> skeptical of any supposedly Friendly output which rapidly led to people
> who were not vulnerable to the fourth scenario (ignoring issues of
> invulnerability due to the implausibility of the situation
> post-singularity).
So, if I understand you correctly, you're saying that you would consider
the ability to detach our happiness from outcomes to others Unfriendly.
That is to say, breaking empathy, or giving a subject control over what
makes them happy or over their emotional attachments, would be Unfriendly.
I disagree here.
As I said before, though, you've raised good points, and I'll think further
on this.
--David.