Re: Existential Risk and Fermi's Paradox

From: Stefan Pernar
Date: Wed Jun 20 2007 - 21:56:26 MDT


On 6/21/07, Toby Weston <> wrote:

> Of course this AI
> could choose to be enthralled by anything, but why
> would it, if it knew it was all built on sand?

Someone will build this AI and will give it a purpose in the form of a
super goal. In that sense it will not choose to do anything - it will merely
act in execution of that goal. Yesterday I sent a paper to the AGI mailing
list that proposes such a super goal.

You can find the paper at

Kind regards,


Stefan Pernar
App. 1-6-I, Piao Home
No. 19 Jiang Tai Xi Lu
100016 Beijing
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT