Re: [sl4] Friendly AIs vs Friendly Humans

From: Jeff Medina
Date: Tue Jun 21 2011 - 08:53:20 MDT

On Tue, Jun 21, 2011 at 2:36 AM, DataPacRat wrote:
> Could anyone here suggest any references on a much narrower subset of
> this problem: [...]

This has been written about, but not as a body of references separate
from the FAI / AI risk / hard takeoff literature more generally. The
literature on FAI isn't so large that it needs narrowing anyway, if
one wants to think about it seriously. (That's ignoring other relevant
reading that is useful for thinking about FAI in the first place,
mostly from subsets of compsci, cogsci, biology, and math.)

So you'd need to read through the FAI-related lit to find the existing
commentary on human-type minds, or at least run a text search over the
soft copies for "human" to narrow down where you're looking.

Michael Anissimov compiled sources on hard takeoff, which ties in
closely with the AI risk / FAI issue.

> And, most importantly, how many false assumptions are behind these
> questions, and how can I best learn to correct them?

False (meta-)assumption: The questions you put forward implicitly
assume it's useful to look for insights into the issue without first
reading what some smart folks have already written about it. This is a
common human thing to do -- I'm sure I've done it too, and will surely
do it again -- but it generally does little to move understanding of
an issue forward (though it's not a waste of time if you're doing it
just for the fun of discussing and debating ideas).

Compare: speculating on how to improve the standard model in physics
without reading prior work. That seems ridiculous in context, but for
subjects that look less obviously math-laden on the surface, our habit
is to jump straight to proposing solutions.

Jeff Medina
"Do you want to live forever?"
"Dunno. Ask me again in five hundred years."
(_Guards! Guards!_, Terry Pratchett)

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT