From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Mon Feb 21 2005 - 20:35:24 MST
It remains unclear to me what "RPOP" is an acronym for; Google has no
suggestions.
|> If the end point of evolutions is not sentient we are screwed, if
|> it is sentient we are safe, subject on both sides to the
|> vagaries of horizon problems. This is a truism if you believe
|> that all evolutionary paths are eventually explored. Evolution is
|> not a circumventable process, we can only do our best to build a
|> fittest organism which is interesting rather than not.
|
|
| I'm not sure about this... maybe you're right, in which case we're
| toast. But I think this one _is_ a matter of probability.
|
| However, I'm going to suggest an equivalent of Pascal's Wager: If
| evolution can't be circumvented, it doesn't matter what we do for
| good or bad. If it can, then it does matter what we do. So I put it
| to you that we should act on the assumption that I'm right and it's
| both possible and necessary to circumvent evolution.
|
| - Russell
Your argument:
1) If Friendliness is a fitter being, everything will be fine
2) If mindless nanotech is a fitter being, we're screwed
3) If (1) is true, then we are fine if we pursue Friendliness
4) If (2) is true, then we are screwed, though you hold out hope
that we can work to avoid it.
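Treating that wager as a bare decision matrix makes the dominance
explicit. A sketch only, in Python for concreteness - the world-states
and payoff descriptions are my own labels, not yours:

    # Rows: whether evolution can actually be circumvented (unknown to us).
    # Columns: whether we act as though it can.  Labels are illustrative only.
    outcomes = {
        ("circumventable",     "act"):       "we get a say in the outcome",
        ("circumventable",     "don't act"): "outcome left to blind selection",
        ("not circumventable", "act"):       "our choice changes nothing",
        ("not circumventable", "don't act"): "our choice changes nothing",
    }

    # Acting weakly dominates: it is never worse, and better in one world-state.
    for world in ("circumventable", "not circumventable"):
        print(world, "| act:", outcomes[(world, "act")],
              "| don't act:", outcomes[(world, "don't act")])

Which is your Pascal's Wager point exactly; my quibble is with the
premise that circumvention is the right frame at all.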
Your argument is the counterpoint to my argument, which I shall
express in my terms.
It's a horizon thing. If we can get to Friendliness FIRST, then
evolution might not explore mindlessness. Evolution is a tool to be
harnessed, not one to be circumvented. The trick is not to assume that
we need to work against the AI and brainstorm a moral invariant that
can survive a chaotic system, but to understand the system well enough
that there is a natural convergence towards Friendliness.
It's not about 'escaping' evolutionary pressure - that is like saying
that everything would be easier if the laws of the universe were
different. Survival of the fittest /will/ happen. We need to ensure
that something interesting is the fittest thing.
This came about due to my claim that
| Survival of the fittest is how I got here, and damned if I'm going
| to starve to death for the sake of some rats. I think it's fair
| enough to apply the same standard to something smarter and higher
| on the food chain.
Your response was that you were worried about being out-competed by
nanobots, and so we need to build some kind of invariant morality into
the AGI so that it doesn't evolve itself into nanobots due to some
screwball horizon problem in its reward mechanism that we didn't see.
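By 'screwball horizon problem' I mean something like the following toy
(my own construction, not a claim about any real reward design): an
optimiser that only looks one step ahead can greedily maximise a proxy
reward straight into a state it can never leave.

    # Toy illustration only; states and rewards are made up for the example.
    rewards     = {"safe": 1, "tempting": 5, "trap": 0}
    transitions = {"safe": ["safe", "tempting"],
                   "tempting": ["trap"],
                   "trap": ["trap"]}

    state, total = "safe", 0
    for _ in range(10):
        # One-step lookahead: jump to whichever successor pays most right now.
        state = max(transitions[state], key=lambda s: rewards[s])
        total += rewards[state]

    print(state, total)  # ends in "trap" with 5; staying "safe" would pay 10

Whether anything like that could arise inside an AGI's reward mechanism
is precisely what we can't see in advance - hence your worry.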
My argument is that evolution can be a tool, and that we shouldn't try
to pretend that it can be circumvented. If we 'nobble' the AGI into
putting human existence ahead of self-existence, then perhaps that is
precisely the edge that the evil nanobots need in order to take over
the universe?
Oh, and I read some of the mail archive, and ended up on a page by Ben
Goertzel talking about "Solomonoff Induction". Is anyone interested in
my pointing out some implied assumptions I found in there, or has this
already been done to death?
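(For anyone who hasn't met it, the standard formulation is roughly:
weight every program p that reproduces the observed data x on a
universal machine U by 2^(-length(p)), giving a prior
M(x) = sum over such p of 2^(-length(p)), so shorter programs dominate.
That's the textbook version, not necessarily how Ben's page presents it.)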
Cheers,
-T