From: Jef Allbright (jef@jefallbright.net)
Date: Sun Sep 19 2004 - 12:02:29 MDT
Michael Wilson wrote:
>Though speculation about post-Singularity development trajectories is
>usually futile, my recent research has thrown up a serious moral issue
>which I believe has important implications for CV. The basic points are
>that a normative method of reasoning exists, it and close approximations
>thereof are tremendously powerful and any self-improving rational
>intelligence (artificial or upload) will eventually converge to this
>architecture unless their utility function explicitly prevents this
>action.
>
>The problem here is that just about all the human qualities we care about
>are actually the result of serious flaws in our cognitive architecture,...
>
>
--- snip ---
>This issue is ultimately a comprehension gap; a universe of perfect
>rationalists might well be rated as valuable inhabitants, but we have no
>way of mapping our conception of worthwhile and desirable onto this
>basically alien assessment. Along the wild ride that has constituted my
>seed AI research to date, my original engineering attitude (focus on
>practical stuff that works, everything can be fixed with enough technology)
>has had to expand to acknowledge the value of both the abstract (normative
>reasoning theory and relevant cosmology) and the humanist (despite all the
>hard maths and stuff you have to cover just to avoid disaster, Friendliness
>ultimately comes down to a question of what sort of universe we want to
>live in).
>
>
>
Michael, you raise some interesting points, but your analysis doesn't
take into account the dynamic, ever-evolving nature of the universe we
find ourselves in. A rational approach, as the term is commonly
construed, requires effectively complete data and sufficient time to
process it. As the game escalates with accelerating technology, the
competitive environment will present an increasing variety of
challenges, and all strategies, however rational, will necessarily lag
behind it.
At the highest conceptual level, all actions are rational, but at any
lower level of context, an effective intentional approach to the
challenges of life involves a mix of rational analysis and heuristic
"going with the flow". In the absence of sufficient information, the
heuristic is the wise choice: it draws on previously successful
strategies, which are likely to do better than a partially thought-out
attempt at rationality. The economics of the situation do not allow a
flat normative solution across the varied and changing landscape in
which we find ourselves now, and such a solution applies even less
when we try to extrapolate today's knowledge and strategies to the far
future (which, given accelerating change, is not very far away).
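To make that trade-off concrete, here is a toy sketch in Python. The
names, the 0.5 thresholds, and the expected_utility and
proven_heuristics parameters are all hypothetical, invented purely for
illustration of the decision rule described above:

    import random

    def choose_action(options, info_quality, time_budget,
                      proven_heuristics, expected_utility):
        """Toy decision rule: mix rational analysis with heuristics.

        info_quality, time_budget: floats in [0, 1] describing how
        complete our data is and how much deliberation time we have.
        proven_heuristics: functions mapping options -> action, each
        embodying a previously successful strategy.
        expected_utility: function option -> float, our explicit model.
        """
        # With incomplete data or too little time, a partially
        # thought-out "rational" analysis is likely worse than a
        # strategy that has already worked: go with the flow.
        if info_quality < 0.5 or time_budget < 0.5:
            heuristic = random.choice(proven_heuristics)
            return heuristic(options)
        # Otherwise, deliberate: pick the option our current model
        # scores highest.
        return max(options, key=expected_utility)

The thresholds are arbitrary stand-ins; the point is only that the
heuristic path is chosen deliberately, as the economical response to
missing information, not as a failure of rationality.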
The best we can do is apply a form of "bounded rationality": bring our
current knowledge and strategies, incomplete but growing, to bear on
an increasingly diverse environment. Rather than extrapolate from what
we know now and then normalize, we must study the ever-changing rules
of the game and then optimize. In doing so, we understand that we
cannot know where we are ultimately heading, but that we are
influencing the direction of the journey according to our values.
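As a toy contrast between "extrapolate and normalize" and "study the
rules, then optimize", here is a sketch under the assumption of a
simple drifting-payoff game; the true_payoff function and the memory
window are invented for illustration, not a claim about real agents:

    def bounded_rational_play(actions, true_payoff, rounds=100,
                              memory=10):
        """Toy bounded rationality: re-learn the game each round.

        true_payoff(action, t) gives the drifting payoff at time t;
        the agent never sees it directly, only sampled outcomes.
        """
        history = {a: [] for a in actions}
        total = 0.0
        for t in range(rounds):
            # Estimate each action's value from recent observations
            # only, since older data describes rules that no longer
            # apply; untried actions score +inf, forcing exploration
            # of the changing landscape.
            def estimate(a):
                recent = history[a][-memory:]
                if not recent:
                    return float("inf")
                return sum(recent) / len(recent)
            # Optimize against the current estimate: study the rules
            # as they stand now, then act.
            choice = max(actions, key=estimate)
            outcome = true_payoff(choice, t)
            history[choice].append(outcome)
            total += outcome
        return total

Discarding stale observations is the whole trick: the agent never
commits to a fixed policy computed at the start, so it tracks the
changing rules instead of optimizing against an extrapolation of them.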
That said, the far-future environment and its challenges will bear
little relationship to current human values and concerns. If such
things are important to an individual, then perhaps a protected
mini-environment is a solution. In my opinion, the bigger game is the
only one worth playing, with the players growing with the game.
- Jef
http://www.jefallbright.net