From: Aleksei Riikonen (aleksei@iki.fi)
Date: Tue Jun 17 2008 - 05:44:06 MDT
On Mon, Jun 16, 2008 at 4:42 AM, Anthony Berglas <anthony@berglas.org> wrote:
>> One of the assumptions you make in the paper is that there will be
>> lots of AIs with lots of different motives, and that those with the
>> motive of world domination at the expense of everything else will
>> prevail. But realistically, people will program AIs to help
>> themselves or their organisations gain wealth and power, and achieving
>> that goal would involve preventing other people with their AIs from
>> gaining the upper hand. In general, it's only possible to prevail if
>> you alone have the superior technology. This argument doesn't apply if
>> there is a hard take-off singularity, in which case our only hope is
>> to make the first AI reliably Friendly.
>
> My assumption is actually a little sharper. Namely:
> if an AI is good at world domination, then it will dominate the world.
Are you making the error of thinking that a Friendly AI couldn't be
good at world domination?
(For the purposes of this discussion, I define "world domination" =
"preventing anything that you don't want to happen from happening".)
-- Aleksei Riikonen - http://www.iki.fi/aleksei