From: Mike (mikew12345@cox.net)
Date: Fri Jun 25 2004 - 11:45:18 MDT
>
> You are probably right though, that without some sort
> of objective morality, there would be no way to
> guarantee that the super-intelligence would stay
> friendly. The FAI had better not find out that the
> invariants in its goal system are arbitrary...
>
Sentients are motivated by their needs. So how do we make an AI *need*
to be good to humans?
- Hope it feels good about being good to us?
- Make sure it relies on us for its existence?
If the AI becomes as god-like as it's often described, humans are pretty
much SOL. The AI can probably take care of its own needs. At best, we
won't be worth its attention; at worst, we'll be an annoyance to be
dealt with.
Mike W.