From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Thu Oct 20 2005 - 05:22:30 MDT
Olie Lamb wrote:
> If we accept that to be "humanoid" an intelligence must get pissed off
> at losing, we can also define "humanoid" as requiring
> self-interest/selfishness, which is exactly the characteristic that I
> thought Friendliness was trying to avoid. An intelligence that cares
> for all intelligences will necessarily care for its own well-being.
> Putting emphasis on one particular entity, where the interests are
> particularly clear, is the start of unfairness. Strong self-interest
> is synonymous with counter-utility. You don't need to get stabby and
> violent for egocentrism to start causing harm to others. Anyhoo,
> strong self-interest does not necessarily lead to violent
> self-preservation, but the two have a fair degree of overlap.
Unusual line of reasoning, but nicely put, assuming 'counter-utility' is
meant in the context of a Friendly utility function. 'Creating a
Friendly AI' makes some additional points in this vein.
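
As a toy illustration of the 'self-emphasis is counter-utility' point
(a sketch of my own, not anything from CFAI; the square-root welfare
curve and the specific weights are arbitrary assumptions): an agent
that maximises an aggregate welfare function in which its own term is
overweighted will, under a fixed resource budget, arrive at an
allocation that scores strictly worse on the impartial aggregate.

    import math

    def impartial_utility(welfare):
        # Friendly baseline: every intelligence's welfare counts equally.
        return sum(welfare)

    def optimal_allocation(budget, weights):
        # Closed-form optimum for maximising sum(w_i * sqrt(r_i)) under
        # a fixed budget: each party's share is proportional to w_i squared.
        total = sum(w * w for w in weights)
        return [budget * (w * w) / total for w in weights]

    def welfare(resources):
        # Diminishing returns: welfare grows as the square root of resources.
        return [math.sqrt(r) for r in resources]

    budget = 100.0
    n = 4  # four intelligences, one of which is the agent itself

    # Impartial agent: equal weights, hence an equal split.
    fair = welfare(optimal_allocation(budget, [1.0] * n))

    # Egocentric agent: weights its own welfare three times as heavily.
    selfish = welfare(optimal_allocation(budget, [3.0] + [1.0] * (n - 1)))

    print(impartial_utility(fair))     # 20.0
    print(impartial_utility(selfish))  # ~17.3: self-emphasis costs everyone

Nothing hangs on the square root: for any strictly concave welfare
curve the equal split uniquely maximises the impartial sum, so any
self-weighting strictly reduces it.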
* Michael Wilson