From: Jeff Medina (email@example.com)
Date: Fri Oct 22 2004 - 13:18:02 MDT
"To rely on public donations is not much different than a monk holding
out a bowl (you never know where the next meal is going to come from).
Unless idealistic individuals in the upper classes can be enlisted to
support the cause, it might be more productive to create capital
through some sort of technological innovation. There are more than a
few clever minds on the SL4 list that could do this."
I agree here as well. Although it may seem a waste of time to work on
a project purely for profit, the funding gains to SIAI more than
outweigh the time 'wasted', in my opinion... making such
profit-seeking a moral imperative.
Imagine if a sub-unit of SIAI (or WTA, etc.) were devoted to *becoming
rich* to help the cause; if a group can't *find* millionaires to
donate huge sums of money to its morally-critical campaign, it
behooves that group to *create* millionaires to donate huge sums of
money, if at all possible (and intelligent, motivated/passionate,
hard-working people quite plausibly can create millions if they set
their minds to it).
"While saving the world is, indeed, an infinite return on investment,
a wiki posting is generally considered a poor return on an investment
of 6 months of donations. For the foreseeable future, I see SIAI
producing more of the latter than the former."
This reminds me of another approach to garnering research support
that SIAI has apparently eschewed, and left me wondering why: instead
of Wiki and intelligence.org publications, why not submit papers on AGI
and AI ethics / existential risk to peer-reviewed publications and
conferences? It seems to me there are a number of good ideas, and
Eliezer is a sufficiently good writer of these ideas, to succeed in
getting a lot of work into the consciousness of academia (Artificial
Intelligence / Computing and Philosophy / Ethics & Applied Ethics,
Science & Technology Studies, etc.). And from there, Eliezer et al.
would have a decent shot at research positions in academia, basically
providing much of the funding toward research that is currently being
scrounged for. Even a trade-off such as being required to teach AI or
ethics to undergraduates sounds worthwhile, esp. considering the
meme-spreading one could do by devoting part of class time to SAI/FAI.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT