Re: Paper: Artificial Intelligence will Kill our Grandchildren

From: Vladimir Nesov (robotact@gmail.com)
Date: Sat Jun 14 2008 - 02:22:38 MDT


On Sat, Jun 14, 2008 at 6:11 AM, Anthony Berglas <anthony@berglas.org> wrote:
>
> So all comments most welcome, especially as to what the paper does not need
> to say.
>

"So this paper proposes a moratorium on producing faster computers. "

Even if it works now (it won't), in the future, when nanotechnology
matures, you won't be able to ban it anyway, just as you can't ban
information now. You are trying to silence the inferno by feeding it
more coal, so that it won't be hungry for a little while.

"While a Friendly AI would be very nice, it is probably just wishful
thinking. There is simply nothing in it for the AI to be friendly to
man. The force of evolution is just too strong. The AI that is good
at world domination is good at world domination. And remember that we
would have no understanding of what the hyper intelligent being was
thinking."

There is no competition (let alone evolution) if all opposition is
controlled. It is world domination, but it doesn't need to carry the
negative connotations of humans striving for world domination, which
come from human evolutionary psychology. What the AI will do with its
power depends on its motivation, and there is no reason that
motivation must repeat the properties of human psychology.

You have no understanding of what Kasparov is thinking when he beats
you at chess, but you do have a very good understanding that the
result of the match will be you losing. So in fact, you do have a
very good understanding of what his thinking is about.

-- 
Vladimir Nesov
robotact@gmail.com


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT