From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Sat May 24 2003 - 18:43:11 MDT
> But I think there are aspects of the AGI morality issue that the
> Institute itself hasn't even flagged.
> Such as?
(From memory) They don't consider the need for AGIs to be friendly to
anything other than humans. And they see the goal as achieving
friendliness, whereas I think we need friendliness plus wisdom.
(Trying to specify wisdom and implement a wisdom capability is going
to be tricky for sure, but I suspect it's necessary too.)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT