From: Petter Wingren-Rasmussen (petterwr@gmail.com)
Date: Fri Feb 20 2009 - 11:30:16 MST
On Fri, Feb 20, 2009 at 5:40 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
>
> I think your initial approach to evolving tit-for-tat strategies in simple
> environments should work. However, when AI reaches human-level intelligence
> (it can do everything the human brain can do), you are in danger of no
> longer controlling the awarding of points. Once the AI can interact with you
> through language, it could convince you to modify the program that controls
> its evolution. If the AI knows everything that you do, then there is no
> way for you to tell whether it is helping you achieve your goals or not. It
> knows exactly which lies it can get away with. It might be following
> tit-for-tat with you, or it might realize that eliminating humanity would be
> its final move with you.
>
Yeah, I think you're right, especially when it comes to superhuman
intelligence.
My hope, though, is that the AI will value its "genetic" heritage in the
same way most humans do, and continue to cooperate instead.
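For readers unfamiliar with the strategy under discussion: tit-for-tat in the iterated prisoner's dilemma cooperates on the first round and thereafter mirrors the opponent's previous move. A minimal sketch, with an illustrative payoff matrix (all names and values here are my assumptions, not part of the original exchange):

```python
COOPERATE, DEFECT = "C", "D"

# Standard prisoner's dilemma payoffs: (player A's score, player B's score)
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT): (0, 5),
    (DEFECT, COOPERATE): (5, 0),
    (DEFECT, DEFECT): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return COOPERATE if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated prisoner's dilemma and return both totals."""
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b
```

Against itself, tit-for-tat cooperates forever; against a pure defector it loses only the first round and then defects back, which is the "retaliation" property the thread is relying on.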
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT