From: Byrne Hobart (bhobart@gmail.com)
Date: Wed Nov 02 2011 - 18:20:29 MDT
Given a sufficiently low discount rate, a paperclip-optimizing AI could be
far more friendly to human goals than the non-AI alternative: a patient
maximizer values the distant future, so it has reason to preserve anything
that keeps producing for it rather than cashing it all in at once. And I'm
going to go out on a limb and assume that any good AI will have a
ridiculously low discount rate.
From a chicken's perspective, humans are an omelet-optimizing omnipotent
AI. And yet we're better than foxes.
See the "Thousand-year Fnarg":
http://unqualified-reservations.blogspot.com/2007/05/magic-of-symmetric-sovereignty.html
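To put rough numbers on the mechanism, here's a toy model in Python. All
of it is my own hypothetical illustration (the payoffs, the horizon, the
two policies are made up, not from this thread), but it shows why the
discount factor matters: a one-shot "convert everything to paperclips now"
policy beats a sustainable "keep the chickens laying" policy only when the
maximizer discounts the future steeply.

# Toy model: a paperclip maximizer compares two policies.
# "harvest": convert everything now (big one-time payoff, nothing after).
# "farm": keep the productive system alive for a steady annual yield.
# All payoff numbers are hypothetical illustration.

def discounted_value(rewards, gamma):
    """Present value of a reward stream under discount factor gamma."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

HORIZON = 1000  # a "thousand-year" horizon, per the Fnarg reference

harvest = [1000] + [0] * (HORIZON - 1)  # one-shot conversion
farm = [10] * HORIZON                   # sustainable yearly yield

for gamma in (0.5, 0.9, 0.99, 0.999):
    h = discounted_value(harvest, gamma)
    f = discounted_value(farm, gamma)
    winner = 'farm' if f > h else 'harvest'
    print(f"gamma={gamma}: harvest={h:.0f}, farm={f:.0f} -> {winner}")

# With a high discount rate (low gamma) the maximizer strip-mines;
# as gamma approaches 1, the steady stream wins, so a patient
# paperclipper has an incentive to keep its "chickens" productive.

With these made-up numbers the crossover sits between gamma = 0.99 and
gamma = 0.999: at 0.999, farming is worth roughly 6,300 against 1,000 for
the one-shot harvest.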
On Wed, Nov 2, 2011 at 4:31 PM, Jens-Wolfhard Schicke-Uffmann <
drahflow@gmx.de> wrote:
> On 11/01/11 18:13, Philip Goetz wrote:
> > The term "Friendly AI" is a bit of clever marketing. It's a technical
> > term that has nothing to do with being friendly. It means a
> > goal-driven agent architecture that provably optimizes for its goals
> > and does not change its goals.
>
> "Friendly AI" also implies that those goals do not conflict (too much) with
> human values. Details vary though.
> See: http://en.wikipedia.org/wiki/Friendly_artificial_intelligence
>
> In particular, an AI which optimizes for the number of paperclips in the
> universe and never changes that goal (both provably) is _not_ a Friendly
> AI (to give the prototypical counterexample).
>
> Jens