Re: [sl4] Friendly AIs vs Friendly Humans

From: Tim Freeman (tim@fungible.com)
Date: Wed Jun 22 2011 - 09:31:43 MDT


>Could anyone here suggest any references on a much narrower subset of
>this problem: if the AI designs being considered are limited to
>human-like minds (possibly including actual emulations of human
>minds), is it possible to solve the FAI problem for that subset? Or,
>put another way, instead of preventing Unfriendly AIs and allowing
>only Friendly AIs, is it possible to avoid "Unfriendly Humans" and
>encourage "Friendly Humans"? If so, do such methods offer any insight
>into the generalized FAI problem? If not, does that imply that there
>is no general FAI solution?
>
>
>And, most importantly, how many false assumptions are behind these
>questions, and how can I best learn to correct them?

Humans are not Friendly. Roughly 1% are schizophrenic and about 4%
are sociopaths. Even the normal ones go to war regularly, and that's
against other humans like themselves, whom they might be expected to
have empathy for. I can't see why a human-like AI would regard us as
conspecifics, so the relationship might be more like human-to-cow than
human-to-human. I would therefore expect us to fare even worse against
a powerful human-like AI than we would losing a war to a group of
humans.

Human society more-or-less works in part because humans are limited
in their ability to wield power, both in time (because they get old
and die) and in cognitive capacity. Neither of these limitations
would apply to a human-like AI.

>If not, does that imply that there is no general FAI solution?

Can't see why. You haven't said anything about AIs that don't
resemble humans, nor did I say anything about them above, so it's not
clear how to convert your statements into a proof that there is no
general FAI solution.

Sorry, I don't have references offhand.
