Re: [sl4] Friendly AIs vs Friendly Humans

From: Robin Lee Powell
Date: Tue Jun 21 2011 - 00:50:57 MDT

On Tue, Jun 21, 2011 at 02:36:51AM -0400, DataPacRat wrote:
> Could anyone here suggest any references on a much narrower subset
> of this problem? If we limit the form of AI designs being considered
> to human-like minds (possibly including actual emulations of human
> minds), is it possible to solve the FAI problem for that subset?
> Or, put another way: instead of preventing Unfriendly AIs and
> allowing only Friendly AIs, is it possible to avoid "Unfriendly
> Humans" and encourage "Friendly Humans"?

The problem there is that to achieve a mathematically precise
structure that would predictably encompass such minds and their
growth through many rounds of self-improvement, we'd have to be able
to describe such minds in a mathematically precise fashion, and
we're just not there yet.

It seems very likely to me that we'll have a much easier time
creating such precision around minds that we built from the ground
up, in the same way that computers or cars are easier to understand
than human brains or giraffe bodies; human-like minds are
*extremely* anti-optimized for this purpose, as you can observe by
opening a newspaper any day. It will almost certainly be much
easier to work with a mind designed around such ideas from the
start, so that the mind and the friendliness proof are part of the
same mathematical structure.

> lu .iacu'i ma krinu lo du'u .ei mi krici la'e di'u li'u traji lo
> ka vajni fo lo preti

(Roughly, from Lojban: "'Skeptically: what is the reason that I
ought to believe this?' is the most important of questions.")

mi na jimpe le krinu be zo .ei .i pe'i na srana .i ji'a do nitcu zo
kei zu'a zo fo .i li'a lu vajni fo lo preti li'u na se djisku

(In English: I don't understand the reason for ".ei", and I don't
think it belongs. Also, you need "kei" to the left of "fo"; clearly
"vajni fo lo preti" is not what you meant to say.)


--
Our last, best hope for a fantastic future.
Lojban: The language in which "this parrot is dead" is "ti poi
spitaki cu morsi", but "this sentence is false" is "na nei".
My personal page:

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT