Re: Singularity Institute: Likely to win the race to build GAI?

From: Mike Dougherty (msd001@gmail.com)
Date: Wed Feb 15 2006 - 17:09:33 MST


One point I don't understand is why anybody thinks there is an AGI "race."

Would it be valuable for a soup kitchen to boast that it will be the one to
feed the most hungry people this year? Or would a women's shelter brag
about having cared for the largest number of domestic violence victims? It
seems ludicrous to allow pride and ego to overshadow noble goals like
humanitarian efforts. If the proposed benefits of super-intelligence (which
recursively self-modifying AI will quickly become) are not humanitarian*
efforts, then nothing about AGI is. If I understand the stated goal of
reaching the Singularity correctly (which may be debated), then it should
not be Intellectual Property Rights, hubris, or ego-driven ideology that
moves us.
  If the madness to be the "first" throws cautious reason to the wind, there
will be nothing remotely like "Friendliness" involved in the resulting
product of such a method. Maybe I haven't posted much in terms of "P" this
or O() that and it may be that my additions to this list have been 'mere'
philosophy, but it just seems like good sense. The obsession with
denouncing others' attempts at achieving >something< cannot be helping
anybody. Having a community for the sake of casually claiming our own
proprietary code is better than someone else's seems like an unproductive
waste of time. Where are the links to shareable modules that illustrate
either potentially useful concepts or explanations of why the apparent
promise will ultimately fail? We're not in a cold war (I hope), so we should
not be protecting ourselves from each other's critical review. I would love
to be involved. So far the only open call for employment I've seen has said
I would not be smart enough or crazy enough. For the record, I would like
some way to prove or disprove even THAT assumption.

*humanitarian: in the sense that it benefits humans, or the group
collectively believed to be "US" -- which may include sentient non-humans, or
biological intelligence so technology-assisted that it no longer passes the
regular test for "human"

On 2/15/06, Peter Voss <peter@optimal.org> wrote:

> Another assumption that I don't share is that "normal" AGI is likely to be
> "Unfriendly" - ie. detrimental to us.
>
