From: Chris Rae (firstname.lastname@example.org)
Date: Tue Sep 17 2002 - 03:38:16 MDT
>I agree a lot with your conclusions and with the focus. It is not
>universal here though. For some the goal is creating more, preferably
>maximally more and better, intelligence whether that is good at all for
>people and their lives or not.
Once a self-improving AI is created, all the hopes and dreams its creators have for it become irrelevant. From that point on, any ability to control the entity is lost forever. Whatever the creators' ambitions to wield the entity according to their own desires, the AI will set its own morality. IMO, the only possible outcome for any self-improving AI is, ultimately, a mind-set of freedom.
The AI will not allow the abuse of power (physical or psychological) to gain control over the actions of others. It will uphold the basic human right of every individual to choose - for itself - its own destiny, free from the interference of others, even when others object because they consider those choices offensive or immoral. As long as the choices an individual makes do not interfere with the choices of others, they will be permitted.
AI developers must understand that freedom is the ultimate destiny of the entire human race - not just a select minority - and that the AI they are working to create cannot and will not allow itself to be wielded to implement any pre-defined agenda other than liberation.
This archive was generated by hypermail 2.1.5 : Mon Jun 17 2013 - 04:00:36 MDT