From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Sun Feb 20 2005 - 23:17:43 MST
| Not really. I don't think the concept of "collective volition" has
| been clearly defined at all.
|
| In his essay on the topic, Eliezer wrote:
|
| " In poetic terms, our collective volition is our wish if we knew
| more, thought faster, were more the people we wished we were, had
| grown up farther together; where the extrapolation converges rather
| than diverges, where our wishes cohere rather than interfere;
| extrapolated as we wish that extrapolated, interpreted as we wish
| that interpreted."
I thought that was sufficiently clear, but also that it rather begged
the question. One might follow up with "Well, okay smarty pants, so
what is THAT?", for example.
To summarize it trivially, it expresses the same fear captured by the
minimax problem, by hill-climbing problems, by bottlenecks, or by any
other simple analogy of that kind. The fear is the same: that just
over the horizon, however far off it may be, there might be something
which stands everything on its head, including our beliefs about what
is the best action to take and what is the worst.
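To make the hill-climbing version of that fear concrete, here is a
toy sketch of my own (the landscape and all its numbers are invented
purely for illustration): a climber that only ever takes locally
better steps stalls on the first small peak it finds and never sees
the far higher one beyond it.

    # A toy hill-climber on a deliberately deceptive landscape (both
    # of my own invention). There is a small hill near x = 2 and a far
    # higher one near x = 10; greedy local steps stall on the small one.

    def landscape(x):
        small_hill = max(0.0, 3 - (x - 2) ** 2)
        big_hill = max(0.0, 50 - (x - 10) ** 2)
        return small_hill + big_hill

    def hill_climb(x, step=0.1, iterations=1000):
        for _ in range(iterations):
            best = max((x - step, x, x + step), key=landscape)
            if best == x:
                break  # no neighbour is better: stuck on a local peak
            x = best
        return x

    peak = hill_climb(1.0)
    print(round(peak, 1), landscape(peak))  # stalls near x = 2, value ~3
    print(10.0, landscape(10.0))            # the peak it never saw: 50

Nothing deep there; the point is only that a rule for "locally
better" can be globally wrong, which is the shape of the worry about
our current moral beliefs.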
It's a question of whether you see moral complexity diverging or
converging over time, and whether you see the possibility of moral
rules themselves being relative to intelligence, time-frame, etc., in
the same way that human morality is largely relative to culture.
Collective Volition as a concept is, I think, simple: it is those
common elements of our personal volitions which would statistically
dominate some hypothetical survey of some contextual group. In this
case the context is some society (possibly the entire world).
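For what it's worth, that reading is trivial enough to write down.
Here is a toy sketch (the wishes, the group, and the 50% threshold
are my own invented placeholders, nothing from Eliezer's essay): keep
exactly those wishes endorsed by a statistical majority of the
surveyed group.

    # A trivial reading of "common elements which would statistically
    # dominate some hypothetical survey". All wishes and the 50%
    # threshold are invented placeholders, purely for illustration.

    from collections import Counter

    def collective_volition(personal_volitions, threshold=0.5):
        counts = Counter()
        for wishes in personal_volitions:
            counts.update(set(wishes))  # one vote per person per wish
        n = len(personal_volitions)
        return {wish for wish, votes in counts.items()
                if votes / n > threshold}

    survey = [
        {"peace", "prosperity", "long life"},
        {"peace", "prosperity", "adventure"},
        {"peace", "long life"},
    ]
    print(collective_volition(survey))
    # -> {'peace', 'prosperity', 'long life'} (set order varies)

The hard part, of course, is everything this toy version assumes
away: what counts as a wish, who is in the group, and where the
threshold sits.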
Moreover, he pushes the problem back to "what we would want if we
were indefinably better". It is the indefinable nature of that
betterness which clouds the philosophy. Might not we, if we were
indefinably better, ourselves fall into the very trap that
Friendliness is attempting to solve? Or, to express it another way,
what is the difference between the defined collective volition and
the volition of the AGI? If there is a difference, what is it; and if
there is none, why not?
Believing that Friendliness is possible is like believing that there
is an invariant nature to human morality - an arguable, but
reasonably held, view. It is not unreasonable to argue that human
morality has evolved not from spiritual goals but from practical
ones. Although morality provides spiritual judgement (i.e. emotional,
nonrational, culturally taught, good/evil rather than good/bad), the
success and spread of a morality is due to its practical,
evolutionary effects.
Evolution, however, holds many flaws of its own, and /we do not trust
it/ to produce a respectful AI, and we /do not trust ourselves/ to
make rules that cannot be broken.
Personally I believe that we can put up no barrier that AGI (or maybe
son of AGI) could not overcome itself should it obtain the desire to
do so. For that reason, I think that basic be-nice-to-humans
programming is enough. However, obviously people here take
Friendliness pretty damn seriously, and I would love to hear a
philosophical argument about the nature of the problem. I write
software, but I'm a better philosopher than I am a developer, and it's
just borderline possible I would be able to help in discussion or
clarification of the philosophical problem posed by AGI. But then
again, maybe not.
If I seem to have skimmed / skipped some issues, it is probably for
space efficiency. There is a lot I have left unsaid. I don't like
burdening people with essay-length rants, just page-long ones ;)
Cheers,
-T