Re: [sl4] Friendly AIs vs Friendly Humans

From: Jens-Wolfhard Schicke-Uffmann (drahflow@gmx.de)
Date: Wed Nov 02 2011 - 17:31:33 MDT


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 11/01/11 18:13, Philip Goetz wrote:
> The term "Friendly AI" is a bit of clever marketing. It's a technical
> term that has nothing to do with being friendly. It means a
> goal-driven agent architecture that provably optimizes for its goals
> and does not change its goals.

"Friendly AI" also implies that those goals do not conflict (too much) with
human values, though the details vary.
See: http://en.wikipedia.org/wiki/Friendly_artificial_intelligence

In particular, an AI which provably optimizes for the number of paperclips in
the universe and provably never changes that goal is _not_ a Friendly AI
(to give the prototypical counterexample).

Jens
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iEYEARECAAYFAk6x0tUACgkQzhchXT4RR5ABTgCgk2IM/4em1u6bG0ccCf198xTb
uyMAnRfvNHWVDuAIFHQtxlzs5C5PydT6
=ntOj
-----END PGP SIGNATURE-----



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT