From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Fri Jun 25 2004 - 17:17:09 MDT
--- Mike <mikew12345@cox.net> wrote:
> >
> > You are probably right though, that without some sort
> > of objective morality, there would be no way to
> > guarantee that the super-intelligence would stay
> > friendly. The FAI had better not find out that the
> > invariants in its goal system are arbitrary...
> >
> >
>
> Sentients are motivated by their needs. So how do we make an AI *need*
> to be good to humans?
> - Hope it feels good about being good to us?
> - Make sure it relies on us for its existence?
>
> If the AI becomes as god-like as it's often described, humans are pretty
> much SOL. The AI can probably take care of its needs on its own. At
> best we may not be worthy of its attention; at worst we'll be an
> annoyance to be dealt with.
>
> Mike W.
>
>
I believe that a superintelligence would wish to surround itself with more intelligence, to keep
things interesting. The universe is more interesting with us in it. If the SAI doesn't want us
around, we can call this a failure of Friendliness, but it may be more accurate to call it a
failure of Intelligence: either the AI is not as smart as it should be, or it thinks we aren't as
smart as we should be. And for sure we are not (at present). But that may not matter as long as we
are not a nuisance.
Tom Buckner