From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Wed Feb 23 2005 - 14:38:20 MST
Phil Goetz wrote:
| --- Tennessee Leeuwenburg <tennessee@tennessee.id.au>
| wrote:
|
|>Well, I think designing in a reasonable sense of
|>self-preservation of
|>identity into the AGI and using a reproductive model
|>will mostly take
|>care of things.
|
|
| I think it is exactly the opposite. As long as we
| don't do that, we should be safe.
|
| The type of behavior we are afraid of in an AI
| is the type of behavior we've seen in humans
| seeking personal power and pleasure. Humans
| seek these things only because these goals have
| evolved. A constructed machine would have no
| such inner will-to-power. Neither would a machine
| without a sense of identity.
|
| So the only great threats of unfriendly AI are
| posed by "seed AI" and other evolutionary approaches.
That is a claim, not an argument.

My argument has the form: if A => B and B => C, then A => C. Take the premises:

1) X is Friendly,
2) X wishes to preserve itself (its identity, of which Friendliness is a part),
3) X wishes to reproduce.

Its offspring must then share the property of Friendliness, or else X has violated premise 2: producing an unFriendly offspring fails to preserve the very identity X is committed to preserving.
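To make the shape of that inference concrete, here is a minimal Lean sketch. The predicate names (Friendly, OffspringOf) and the bridging assumption `preserves` are my own illustrative labels, not anything established in this thread; `preserves` is just premise 2 read as "X's Friendliness carries over to anything X spawns".

    variable {Agent : Type}
    variable (Friendly : Agent → Prop)
    variable (OffspringOf : Agent → Agent → Prop)

    -- Premise 2, read as identity-preservation across reproduction:
    -- whatever X spawns inherits X's Friendliness.
    variable (preserves : ∀ x y, Friendly x → OffspringOf y x → Friendly y)

    -- The conclusion then follows by direct application.
    theorem offspring_friendly (x y : Agent)
        (hx : Friendly x) (hy : OffspringOf y x) : Friendly y :=
      preserves x y hx hy

Of course, all the work is hidden in the assumption `preserves`; the open question is whether a wish to self-preserve actually entails it.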
Your points are good, but I would like to deconstruct my own argument further before looking at the broader question...
-T