From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Mon May 17 2004 - 19:46:40 MDT
Keith (and SL4),
All this talk about 'best interest' drives me a little nuts... but I guess
that is okay, because it prompts me into thinking productively.
I recently discussed the Big Brother aspect of FAI with an intelligent
newbie, and the necessity of explaining alternatives and providing cogent
examples led me to greater thought on the subject. My earlier conclusions,
that volitional morality will be very important to our freedom and safety in
the future, have gained even more validity as a result of more detailed
thinking on the issue. This thread, too, and the discussion of FAIs
estimating 'best interest', have driven me to articulate in greater detail
why having FAIs adhere to volitional morality as closely as possible would
be a good idea.
The problem with the idea of 'best interest' in relation to a volitional
being is not in its estimation, but in the resulting actions that some
suggest would be needed/wanted/rational/necessary. When someone opines that
I will not be allowed to do something because it is not in my 'best
interest', then klaxons begin sounding in my head. When someone elaborates
further and suggests that others' interests might outweigh my own in some
arbitrary decision, I hallucinate strobing red lights. When another 'pipes
in' that the problem may not be solvable, all hell breaks loose upstairs,
and I imagine frightened sailors running through corridors and closing
bulkheads.
It should go without saying that a superintelligent Friendly being is going
to know what is in our 'best interest' better than we do. But so what?
That does not mean, and should not mean, that we cede personal decisions to
the FAI.
In so far as a person's decisions/actions are morally positive or morally
neutral (see definition in previous post:
http://www.sl4.org/archive/0405/8491.html) ...
The assessment of 'best interest' can be correctly used by an FAI to decide
action in regard to an individual volitional being, a person... but that
action should not be to overpower the person "for their own good", as that
would be a morally negative act. The action could instead be persuasive,
non-invasive and helpful - in a way only an FAI-like being could be. But
the decision, the volition, should remain with the person, not be usurped by
the FAI.
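To make that distinction concrete, here is a toy sketch in Python (my own
illustration, not anything proposed on this list) of how 'best interest'
estimates could feed into action selection without ever licensing coercion.
Every name in it - Action, overrides_volition, expected_benefit - is
hypothetical:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    description: str
    overrides_volition: bool   # acts on the person without their consent?
    expected_benefit: float    # FAI's estimate of the person's 'best interest'

def permissible(action: Action) -> bool:
    # The volitional constraint comes first: no estimate of benefit,
    # however large, can justify overriding the person's own decision.
    return not action.overrides_volition

def choose(candidates: list[Action]) -> Optional[Action]:
    allowed = [a for a in candidates if permissible(a)]
    # Among actions that leave volition intact (persuade, inform, offer
    # help), pick the one estimated to help the person most.
    return max(allowed, key=lambda a: a.expected_benefit, default=None)

The point of the sketch is only that the 'best interest' estimate ranks the
advisory options; it never unlocks the coercive ones.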
When the assessment of 'best interest' is made for a group rather than an
individual, then that information can also be used to decide action in
regard to that group, or to individuals within it. But again, that action
should not be to overpower any one of the individuals, or the group as a
whole, "for their own good". This would make manifest the feared idea of Big
Brother watching and controlling everything, a hugely morally negative
situation. The morally positive action an FAI might take could again be
persuasive, non-invasive and helpful - advice, not force. And again, each
individual would be and should be responsible for their own decisions and
actions.
As to the problem not being solvable, that is hyperbole. Humans have been
successfully solving the problem of conflicting wants & needs for millennia.
Sure, sometimes primitive mental programming gets switched on and all hell
breaks loose (competition for mates, war, etc.) but most of the time people
trade, negotiate, make deals, treaties and agreements - they sometimes even
come up with win-win solutions (gasp!). If mere humans can solve this
problem, and on occasion solve it well, then an FAI should not find it too
difficult to facilitate.
As for there being a single, universal 'best interest' - that item just
doesn't exist. Each volitional being decides for itself what its 'best
interest' is, and that evaluation is in constant flux. It is part-and-parcel
of being a conscious, intelligent being.
In so far as a person's decisions/actions are morally negative... well, that
is a whole-nother post.
Keith, when you wrote: "...understanding these [Ev.Psyc.] matters might be
essential to providing the environment in which friendly AI can be
developed."
-- Sort of. It is not the environment that will be improved, but the
accuracy of the FAI's human-cognition model. It is very important that an
FAI understand the ways in which humans think, so that it can better model the
future, and better understand the human-generated data that will be
presented to it. It is not enough for an FAI to determine that Johnny
behaves with an approximation to Bayesian rationality 82.6% of the time.
The FAI needs to know what Johnny is mentally doing the other 17.4% of the
time, and why,
and in what situations his cognition is likely to switch between modes.
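As a toy illustration (mine alone, and deliberately simplistic), one could
imagine the human-cognition model tracking, per situation, which mode a
person is likely running, rather than reporting just one overall percentage.
The feature names and numbers below are made up:

from collections import Counter

# Hypothetical hand-coded rule: certain situation features make an older
# heuristic mode more likely than deliberate, roughly-Bayesian reasoning.
HEURISTIC_TRIGGERS = {"time_pressure", "status_threat", "mate_competition"}

def predict_mode(situation_features: set[str]) -> str:
    if situation_features & HEURISTIC_TRIGGERS:
        return "heuristic"
    return "approx_bayesian"

def mode_profile(observed_situations: list[set[str]]) -> dict[str, float]:
    counts = Counter(predict_mode(s) for s in observed_situations)
    total = sum(counts.values())
    return {mode: n / total for mode, n in counts.items()}

# Three observed situations for a hypothetical "Johnny":
profile = mode_profile([
    {"calm", "planning"},
    {"time_pressure", "deadline"},
    {"mate_competition"},
])
# -> roughly {'approx_bayesian': 0.33, 'heuristic': 0.67}

A real FAI would of course learn something far richer than a lookup of
trigger features, but the shape of the question is the same: not just how
often Johnny is approximately rational, but which mode he is in, when, and
why.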
Be well.
Michael Roy Ames