Re: Singularity Institute - update

From: Brian Atkins (brian@posthuman.com)
Date: Sat May 03 2003 - 19:31:10 MDT


Samantha Atkins wrote:
>
> I think there is a false assumption that useful people resources need to
> understand everything about FAI in near Eliezer-like depth before they
> can do a lot of very useful and needed work.

Eliezer can correct me later if I am providing mistaken info here, but
my impression is that over the past 3-4 years he has moved increasingly
further from the idea of providing detailed open access to his AGI
design (note: this is not exactly the same thing as his FAI ideas of
course). From the latest updates I'm hearing, it sounds like he has
decided that properly implementing Friendliness is so utterly important,
while also being so hard to grasp fully, that things look rather
pessimistic on the open source front.

So to sum up: the idea of even excellent-level programmers being able to
jump into this without deep understanding is not an idea we wish to
contemplate or encourage at this point. Hence the need for the book.

Here's a page he wrote about this recently:

http://sl4.org/bin/wiki.pl?SoYouWantToBeASeedAIProgrammer

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT