Re: [sl4] Friendly AIs vs Friendly Humans

From: Byrne Hobart
Date: Wed Nov 02 2011 - 18:20:29 MDT

Given a sufficiently low discount rate, a paperclip-optimizing AI could be
far more friendly to human goals than the non-AI alternative. And I'm going
to go out on a limb and assume that any good AI will have a ridiculously
low discount rate.

From a chicken's perspective, humans are an optimizing-for-omelet
omnipotent AI. And yet we're better than foxes.

See the "Thousand-year Fnarg"

On Wed, Nov 2, 2011 at 4:31 PM, Jens-Wolfhard Schicke-Uffmann <> wrote:

> On 11/01/11 18:13, Philip Goetz wrote:
> > The term "Friendly AI" is a bit of clever marketing. It's a technical
> > term that has nothing to do with being friendly. It means a
> > goal-driven agent architecture that provably optimizes for its goals
> > and does not change its goals.
> "Friendly AI" also implies that those goals do not conflict (too much) with
> human values. Details vary though.
> See:
> In particular, an AI which optimizes for the number of paperclips in the
> universe and never changes that goal (both provably) is _not_ a Friendly AI
> (to give the prototypical counterexample).
> Jens

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT