From: Samantha Atkins (firstname.lastname@example.org)
Date: Sun Oct 24 2004 - 03:25:21 MDT
On Oct 22, 2004, at 12:18 PM, Jeff Medina wrote:
> "To rely on public donations is not much different than a monk holding
> out a bowl (you never know where the next meal is going to come from).
> Unless idealistic individuals in the upper classes can be enlisted to
> support the cause, it might be more productive to create capital
> through some sort of technological innovation. There are more than a
> few clever minds on the SL4 list that could do this."
> I agree here as well. Although it may seem as though it would be a
> waste of time to work on a project purely for profit, the benefits to
> SIAI in terms of funding gains more than outweigh the time 'wasted',
> in my opinion... making such profit-seeking a moral imperative.
At one time I was more of this opinion. Then I examined my own 25
years of commercial programming, spent continuously believing that some
day I would get far enough ahead, or be around at the right IPO, so that
I would have the funds to do what I really believe is best for me to do
with whatever remained of my life. Bootstrapping is of course more
direct than this. But one must be careful not to get so involved in
"business" that one forgets the real goal, or has almost no time left for it.
In my opinion, what is critically required is that Eliezer and others
do precisely what is truly of maximal benefit, and that they be
supported in doing so regardless of whether that work has directly
marketable consequences or not. The marketable consequences that are
not dangerous should be exploited and some of the profit fed back in.
But this is not likely the best use of Eliezer's time to obtain maximal
funding.
A really strong evangelist, or a set of them, is needed. That
individual or group can "break it down" into a case that wins
sufficient contributions. Raising that kind of money is very doable; we
just don't have the right evangelist or evangelical approach yet.
> Imagine if a sub-unit of SIAI (or WTA, etc.) were devoted to *becoming
> rich* to help the cause; if a group can't *find* millionaires to
> donate huge sums of money to its morally-critical campaign, it
> behooves that group to *create* millionaires to donate huge sums of
> money, if at all possible (and intelligent, motivated/passionate,
> hard-working people quite plausibly can create millions if they set
> their minds to it.)
That is a round-about approach. Millionaires aplenty already exist.
Angel investors exist. A faster approach is to learn how, or find
someone who knows how, to successfully win them to the cause. Nor
should the efforts be limited to the relatively wealthy. Many churches
attempt to extract 10% in tithes. If mystical (and usually watered
down even for mysticism) belief systems can demand that much without
everyone leaving in outrage, can we not each donate as much as we spend
on our DSL or cable TV in a month? If we cannot, can we honestly
expect people to believe that we take the problem seriously, or that we
believe there is any possibility of a solution?
I certainly have my own doubts about the current approach. But I
believe strongly that the problems we face are dire and urgent and that
conventional solutions are not adequate. I know that increased
effective intelligence is required. That doesn't mean that only SAI
is a possible solution. It is an attempt at a solution. As such it
deserves to be supported.
> "While saving the world is, indeed, an infinite return on investment,
> a wiki posting is generally considered a poor return on an investment
> of 6 months of donations. For the foreseeable future, I see SIAI
> producing more of the latter than the former."
> This reminds me of a another approach to garnering research support
> that SIAI has apparently eschewed and left me wondering why: instead
> of Wiki and intelligence.org publications, why not submit papers on AGI
> and AI ethics / existential risk to peer-reviewed publications and
> conferences? It seems to me there are a number of good ideas, and
> Eliezer is a sufficiently good writer of these ideas, to succeed in
> getting a lot of work into the consciousness of academia (Artificial
> Intelligence / Computing and Philosophy / Ethics & Applied Ethics,
> Science & Technology Studies, etc.). And from there, Eliezer et al.
> would have a decent shot at research positions in academia, basically
> providing much of the funding toward research that is currently being
> scrounged around for. Even a trade-off such as being required to teach
> AI or ethics to undergraduates sounds worthwhile, esp. considering the
> meme-spreading one could do by devoting part of class time to SAI/FAI
Every academic I know tells of the death by a thousand cuts that is the
academic life. But I will defer to Dr. Ben Goertzel for a more
considered opinion on whether academic standing, papers, symposia and
so on lead to more funding and research opportunity for doing SAI.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT