From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Sun Mar 06 2011 - 11:42:05 MST
BTW, as far as I know the interesting parts of this list have mostly
moved to LessWrong.
On Sun, Mar 06, 2011 at 01:14:23PM +0000, Amon Zero wrote:
> it seems reasonable to expect that non-Friendly AGI would be
> easier to develop than Friendly AGI
Certainly, which is why SIAI started before anyone else with a brain
was really seriously working on AGI (ignoring the 1960s, of course).
You should donate. :)
> (even if FAI is possible, and there seem to be good reasons to
> believe that universally Friendly superhuman AGI would be
> impossible for humans to develop).
I haven't seen any that seemed compelling to me yet.
> Because Friendliness is being worked on for very good (safety)
> reasons, it seems to me that we should be thinking about the
> possibility of "locally Friendly" AGI, just in case Friendliness
> is in principle possible, but the full package SIAI hopes for
> would just come along too late to be useful.
>
> By "locally Friendly", I mean an AGI that respects certain
> boundaries, is Friendly to *certain* people and principles, but
> not ALL of them. E.g. a "patriotic American" AGI. That may sound
> bad, but if you've got a choice between that and a completely
> unconstrained AGI, at least the former would protect the people it
> was developed by/for.
Probably not a terrible fallback position, but I strongly suspect
that in practice the two problems ("self-improvement whilst provably
loving everybody" and "self-improvement whilst provably loving those
people over there") are of exactly the same difficulty.
-Robin
--
http://intelligence.org/ : Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which "this parrot
is dead" is "ti poi spitaki cu morsi", but "this sentence is false"
is "na nei".
My personal page: http://www.digitalkingdom.org/rlp/