From: Jens-Wolfhard Schicke-Uffmann (firstname.lastname@example.org)
Date: Wed Nov 02 2011 - 17:31:33 MDT
On 11/01/11 18:13, Philip Goetz wrote:
> The term "Friendly AI" is a bit of clever marketing. It's a technical
> term that has nothing to do with being friendly. It means a
> goal-driven agent architecture that provably optimizes for its goals
> and does not change its goals.
"Friendly AI" also implies that those goals do not conflict (too much) with
human values. Details vary though.
In particular, an AI which optimizes for number of paperclips in the universe
and never changes that goal (both provably), is _not_ a friendly AI.
(to give the prototypical counter example)
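The architecture Philip describes can be caricatured in a few lines. This is a
toy sketch of a fixed-goal greedy optimizer, not anyone's actual proposal; all
names here are made up for illustration:

```python
def paperclip_utility(state):
    """The agent's fixed goal: count paperclips in a toy world state."""
    return state.get("paperclips", 0)

def choose_action(state, actions, transition, utility=paperclip_utility):
    """One-step greedy optimizer. Note the goal function is never rewritten:
    the agent only ever selects the action scoring best under it."""
    return max(actions, key=lambda a: utility(transition(state, a)))

def transition(state, action):
    """Toy world dynamics: one action manufactures a paperclip."""
    new = dict(state)
    if action == "make_paperclip":
        new["paperclips"] = new.get("paperclips", 0) + 1
    return new

state = {"paperclips": 0}
print(choose_action(state, ["idle", "make_paperclip"], transition))
```

The point of the counterexample survives the simplification: nothing in this
loop references human values, yet it optimizes its goal and never alters it.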
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT