From: Joshua Fox (joshua@joshuafox.com)
Date: Mon Nov 12 2007 - 00:11:42 MST
Eliezer Yudkowsky wrote:
> Because unless you narrowly restrict the available options to Tit for
> Tat like behavior, it's too hard. You can't get simulation of general
> consequentialist reasoning without general intelligence. Never mind
> simulations of tribal alliance formation and linguistic political
> persuasion. This is very advanced stuff, cognitively speaking.
But surely any complex system can be simplified into a model. The model
would not be as good as the real thing, but toy examples often do
teach something. I am also suggesting that, as a first and easier
stage, we just evaluate the morality of agents rather than drive
their decisions.
Here's something that's technically feasible (although perhaps not
practically): Have disinterested observers spy on MMORPG or Second
Life characters and evaluate the morality of their actions
numerically. Imagine a number hovering over each character giving the
average morality rating. Now have some software do this using various
morality functions.
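To make that concrete, here is a minimal Python sketch of such an
observer. Everything in it (the Action record, the two toy morality
functions, the running average shown over each character) is a
hypothetical illustration, not any real MMORPG or Second Life API.

from dataclasses import dataclass
from statistics import mean
from typing import Callable, Dict, List

@dataclass
class Action:
    actor: str    # character performing the action
    kind: str     # e.g. "attack", "heal", "trade"
    target: str   # character affected

# Two toy "morality functions" mapping an action to a score in [-1, 1].
def utilitarian_score(a: Action) -> float:
    return {"heal": 1.0, "trade": 0.5, "attack": -1.0}.get(a.kind, 0.0)

def pacifist_score(a: Action) -> float:
    return -1.0 if a.kind == "attack" else 0.0

class Observer:
    """Disinterested observer: logs each action and keeps a running
    average morality rating per character across all functions."""
    def __init__(self, functions: Dict[str, Callable[[Action], float]]):
        self.functions = functions
        self.scores: Dict[str, List[float]] = {}

    def watch(self, action: Action) -> None:
        # Average the verdicts of all morality functions for this action.
        verdict = mean(f(action) for f in self.functions.values())
        self.scores.setdefault(action.actor, []).append(verdict)

    def hovering_number(self, actor: str) -> float:
        # The number displayed over the character's head.
        return mean(self.scores.get(actor, [0.0]))

obs = Observer({"utilitarian": utilitarian_score,
                "pacifist": pacifist_score})
obs.watch(Action("Alice", "heal", "Bob"))
obs.watch(Action("Alice", "attack", "Carol"))
print(obs.hovering_number("Alice"))  # -0.25: one good deed, one worse one

Swapping in different morality functions, or weighting them, would let
the same observer display ratings under competing ethical theories.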
Joshua