From: Anthony Berglas (anthony@berglas.org)
Date: Sun Jun 15 2008 - 19:42:00 MDT
>One of the assumptions you make in the paper is that there will be
>lots of AIs with lots of different motives, and that those with the
>motive of world domination at the expense of everything else will
>prevail. But realistically, people will program AIs to help
>themselves or their organisations gain wealth and power, and achieving
>that goal would involve preventing other people and their AIs from
>gaining the upper hand. In general it's only possible to prevail if
>you alone have the superior technology. This argument doesn't apply if
>there is a hard take-off singularity, in which case our only hope is
>to make the first AI reliably Friendly.
My assumption is actually a little sharper, namely:

    If an AI is good at world domination, then it will dominate the world.
Whether such an AI will ever exist is a separate question. But many people
desire one, if only to beat their competitors, so a source for such goals
is not unlikely.
I am adding this last point to my paper, due to feedback from this list.
Thanks,
Anthony
>--
>Stathis Papaioannou
Dr Anthony Berglas, anthony@berglas.org Mobile: +61 4 4838 8874
Just because it is possible to push twigs along the ground with one's nose
does not necessarily mean that is the best way to collect firewood.