From: Zeph Campbell (zephc@hotlinehq.com)
Date: Mon Mar 31 2003 - 10:39:20 MST
I'm sure an SI/AGI could easily manipulate the will of any human,
convincing the human that he/she is doing the right thing by letting
the SI take control (for whatever reason). Humans do it to other
humans all the time. A Friendly SI might end up doing so as well if
it didn't understand human suggestibility. But then, one hopes a
Friendly SI would know a great deal about human psychological quirks
before trying to 'help out' anyway.
- Zeph
On Monday, March 31, 2003, at 08:46 AM, SMcClenahan@ATTBI.com wrote:
> As one friendly human to another, if I were to "help" you without
> solicitation, it could be interpreted as some sort of hostile
> takeover, an invasion of privacy, etc. Even if I just provided
> assistance, without the right balance of help versus assistance,
> the assistee's opinion could swing either way. I assume that one of
> the goals of intelligence in Friendliness is to understand that
> balancing act and know when to take over, assist, or leave alone
> another being's actions so that they become overall happy or
> happier.
>
> It all still comes down to the pleasure/pain model. Humans (and
> generally all sentients) want to increase pleasure and decrease
> pain. The actions we take to achieve this reflect our level of
> intelligence. Most, if not all, people are intelligent in only a
> limited range of problem domains. An AGI is designed to be
> generally intelligent across all problem domains. A Friendly AGI
> should understand the pleasure/pain model of human existence and be
> able to act accordingly; hence the friendly part.
>
> cheers,
> Simon
>