From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Thu Oct 15 2009 - 11:17:04 MDT
On Mon, Oct 12, 2009 at 09:57:51AM -0500, Bryan Bishop wrote:
> On Mon, Oct 12, 2009 at 9:30 AM, Thomas Buckner wrote:
> > This is why such discussions get sniped by the moderator. The
> > whole sl4 forum is meant to further ideas about how to make sure
> > the AGI (Artificial General Intelligence) is Friendly, i.e. that
> > it doesn't do badly by us, even unintentionally. It's an
> > insanely hard problem and some very smart people disagree about
> > the best approach.
>
> http://sl4.org/
>
> "... The SL4 mailing list is a refuge for discussion of advanced
> topics in transhumanism and the Singularity, including but not
> limited to topics such as Friendly AI, strategies for handling the
> emergence of ultra-powerful technologies, handling existential
> risks (planetary risks), strategies to accelerate the Singularity
> or protect its integrity, avoiding the military use of
> nanotechnology and grey goo accidents, methods of human
> intelligence enhancement, self-improving Artificial Intelligence,
> contemporary AI projects that are explicitly trying for genuine
> Artificial Intelligence or even a Singularity, rapid Singularities
> versus slow Singularities, Singularitarian activism, and more."
>
> I'm ready to leave if we're all supposed to be on board the FAI
> train. Is sl4.org wrong or not representative of what this list is
> actually about?
As far as I know, sl4.org is correct, and I haven't seen anything
sniped on here in *ages*.
-Robin
--
They say: "The first AIs will be built by the military as weapons."
And I'm thinking: "Does it even occur to you to try for something
other than the default outcome?" See http://shrunklink.com/cdiz
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/