From: Metaqualia (metaqualia@mynichi.com)
Date: Thu Jun 03 2004 - 14:59:35 MDT
> I think it would be a mistake to ever remove humans entirely from the
> process. Instead, the FAI should always assume that it has an
> *approximation* of morality, and never replace the "territory" with a
> "map of the territory", no matter how accurate that map seems.
I agree.
What Eliezer is suggesting is that the AI maps the territory,
extrapolates volcanic and tectonic activity for the centuries to come,
and then reasons from a map of this new, extrapolated future territory:
"if the mountains were taller, the seas bluer, and the fish had evolved
to step out onto dry land, then ...<conclusion>".
How can you make this kind of forecast if you are not even sure you
have the correct present-day map of the territory, much less a complete
theory of how the map changes over time?
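
To make the worry concrete, here is a toy sketch in Python (all numbers
invented) of why the two uncertainties compound: whatever error sits in
our theory of change gets multiplied by the forecast horizon, on top of
the error already in the present-day map.

    import random

    random.seed(0)

    # Toy model (invented numbers): the present-day "map" is a noisy
    # estimate of the true territory, and we extrapolate it t steps
    # ahead with an imperfect theory of how the territory changes.
    TRUE_VALUE = 10.0    # the actual present-day territory
    TRUE_DRIFT = 0.50    # how the territory really changes per step

    estimated_now = TRUE_VALUE + random.gauss(0.0, 0.5)     # map error
    estimated_drift = TRUE_DRIFT + random.gauss(0.0, 0.05)  # theory error

    for t in (1, 10, 100):
        truth = TRUE_VALUE + TRUE_DRIFT * t
        forecast = estimated_now + estimated_drift * t
        print(f"t={t:4d}  truth={truth:7.2f}  forecast={forecast:7.2f}  "
              f"error={abs(forecast - truth):6.2f}")

Any nonzero error in the estimated drift dominates at long horizons,
which is exactly the regime a centuries-ahead extrapolation lives in.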
> extrapolations extend). The AI might extrapolate a convergence towards
> 37.89128412424, but should consider that result to be a hypothesis
> which can then be tested by further interaction with humans.

IMO, this seems the safest way. Although when we speak about morality,
it's more like having one thermometer that measures distance from the
moon, one that measures distance from Jerusalem, one that gives random
numbers, and so forth... we're not all slightly off; there is no mean;
we're all pointing in different directions.
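
A minimal sketch of that statistical point (Python again; the values
are placeholders): averaging only helps when the instruments are noisy
readings of one underlying quantity. If each instrument measures
something different, the mean is still computable, but it estimates
nothing.

    import random
    from statistics import mean

    random.seed(0)

    # Case 1: a thousand noisy thermometers all measuring the same
    # quantity. Errors cancel, and the mean converges on the truth.
    truth = 37.0
    same_quantity = [truth + random.gauss(0.0, 1.0) for _ in range(1000)]
    print(f"noisy readings of one quantity: mean = {mean(same_quantity):.2f}")

    # Case 2: instruments that each measure a *different* quantity
    # (placeholder values: distance to the moon in km, distance to
    # Jerusalem in km, a random number). The mean is computable,
    # but it is not an estimate of anything real.
    different_quantities = [384_400.0, 8_823.0, random.uniform(0.0, 100.0)]
    print(f"mean of unrelated quantities: {mean(different_quantities):.2f}")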
mq