Re: AI debate at San Jose State U.

From: Olie Lamb (olie@vaporate.com)
Date: Wed Oct 19 2005 - 21:37:24 MDT


Richard Loosemore wrote:

> <Snip>
>
> As far as I am concerned, the widespread (is it really widespread?)
> SL4 assumption that "strictly humanoid intelligence would not likely
> be Friendly ...[etc.]" is based on a puerile understanding of, and
> contempt of, the mechanics of human intelligence.

It's not difficult to show this by definitional fiat:

Woody Long defined humanoid intelligence:

1. "Humanoid intelligence requires humanoid interactions with the world" --
MIT Cog Project website

This means a fully "human intelligent" SAI must ... feel the thrill of
victory and the agony of defeat.

If we accept that to be "humanoid" an intelligence must get pissed off
at losing, then we are also defining "humanoid" as requiring
self-interest/selfishness, which is exactly the characteristic I
thought friendliness was trying to avoid. Granted, an intelligence that
cares for all intelligences will necessarily care for its own
well-being too; but putting extra emphasis on one particular entity,
the one whose interests it perceives most clearly, is the start of
unfairness. Strong self-interest runs counter to overall utility: you
don't need to get stabby and violent for egocentrism to start causing
harm to others. Anyhoo, strong self-interest does not necessarily lead
to violent self-preservation, but the two overlap to a fair degree.

> Whereof you disdain not to understand, thereof you should not speak.
>
> Richard Loosemore

It doesn't always take full understanding to know when something is
wrong. I don't speak much German (just a few words), but I can often
tell when someone's German is way off. It doesn't take an Olympic-level
cyclist to recognise when a rider has fallen flat on their face. That
said, humility is a good thing.

-- Olie
