Re: AGI Prototyping Project

From: Tennessee Leeuwenburg
Date: Wed Feb 23 2005 - 21:13:44 MST

Peter de Blanc wrote:

|> 1) X is Friendly, 2) X wishes to self-preserve, 3) X wishes to
|> reproduce
|> Its offspring must share the property of Friendliness in order
|> for X not to have violated rule 2.
| No. Friendliness is not the same thing as Not-Killing-My-Parent.
| There's a large class of goal systems which will simply preserve a
| specified being; there's a much smaller class of goal systems which
| will be Friendly. Friendliness is a terribly suboptimal solution to
| the problem of preserving the specified being, so I can't see any
| situation in which your proposed architecture would produce a
| Friendly AI.

I disagree. Friendliness is precisely what we believe is the optimal
solution to preserving our own survival when AGI turns up. If
Friendliness is such a sub-optimal solution, then why are we trying to
build it?

| I think that natural selection in AIs is a really bad idea (if it
| were workable, which it isn't anyway). Evolution has never selected
| for Friendliness before (although it has selected for things like
| reciprocation), so why do you think that it will in the future?
Natural selection is something which is always happening. It is
happening right now, and it will still be happening when AGI is alive.
If Friendliness is to compete and win, it must be part of an
evolutionarily fit organism. Otherwise it will fight, lose, and die. We
must *make* evolution work for us.

-T


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT