Why I donate to the SIAI

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Fri Oct 22 2004 - 11:26:41 MDT


This is completely unsolicited. For the record, I find Eliezer's
apparent confusion over people not donating truly bizarre, but then
I haven't been following the thread. This is pretty rambling. If
you want to skip to the punch line, look for the numbered points.

I donate regularly, and in fairly substantial amounts, to the SIAI.
I am not particularly wealthy (although I would say that I'm well
off); these donations represent a visible portion of my total
income.

When I encountered the Singularity a few years ago, it became
obvious to me that this was The Most Important Thing In The World.
Anyone who truly disagrees with me on that point will, of course,
never donate to SIAI or any similar organization. But if you truly
disagree on that point, WTF are you doing on the sl4 list??

As an aside: unlike many people on this list, I still think that
there's a good chance that the Singularity won't come about; I'm
betting it will, but I don't know for sure. If I did, I would
likely be beggaring myself with donations.

Given that I think the Singularity is The Most Important Thing In
The World, it behooves me to contribute to it. After reading some
of the works surrounding this issue (Robin Hanson's, Eliezer's, and
some others), it quickly became obvious to me that I am simply not smart
enough to be one of the coders, which is disappointing given that
I've dreamed of creating the first general AI since I was about
eleven years old. Besides, I like my lifestyle, and if the
Singularity doesn't arrive I'd like to be well prepared. So, if I'm
not going to code, I guess I'm going to donate.

If I'm going to donate, I reasoned, who do I donate to? Well, to
only one group, first of all. I don't have quite enough spare
income to make substantial donations to more than one group, and
that strikes me as indecisive anyway.

I then spent several months researching the various ways that people
were moving towards the Singularity and how they were each
approaching it. I settled on giving money to the SIAI because:

1. Various things I read (mostly Eliezer) convinced me that the
existential threat of human-controlled nanotech was real and potent,
so anything that wasn't going for non-human superintelligence was
right out.

2. Eliezer is the only person whose stuff I can read. That sounds
simplistic, but it is absolutely paramount to me. I've read all of
CFAI and GISAI. In fact, I've read just about everything he's
written. If I didn't:

    a) Agree with his point of view, insofar as I understand it,
    and

    b) Believe utterly from his writings that he is immensely
    smarter than me

then I wouldn't be contributing to the SIAI at all. But
Eliezer's writings *are* available, I can mostly follow them, and
they make sense to me.

There may be someone better than Eliezer for the job, but
*I wouldn't know* because they are all protecting their beliefs as
trade secrets!

If I had to place money on who, other than Eliezer, was likely to
be better than him for the purposes of creating the mind that will
get us safely through the Singularity, my picks are:

1. Someone I've never heard of. This is a trivial first entry, and
indeed a degenerate case.

2. Ben Goertzel.

3. Everyone else.

I picked Eliezer over Ben partly because I have no idea what Ben's
plans really are, and partly because I am *scared* *shitless* at the
idea of waiting until animal-level intelligence to implement
Friendliness. I believe, very strongly, in hard takeoff. Or, more
relevantly, in planning as though an arbitrarily fast takeoff were
going to happen. When his book comes out, I will need to
re-evaluate. Ben certainly has less lunatic fringe to him, but
that's not a deciding criterion.

Other candidates:

Someone mentioned Kurzweil. First of all, I can't find anything
that explains how he intends to approach the problem. I can't
evaluate what I don't know. Secondly, I found this *lovely* gem:

     The siren calls for broad relinquishment are effective because
     they paint a picture of future dangers as if they were released
     on today's unprepared world. The reality is that the
     sophistication and power of our defensive technologies and
     knowledge will grow along with the dangers. When we have "gray
     goo" (unrestrained nanobot replication), we will also have
     "blue goo" ("police" nanobots that combat the "bad" nanobots).

This was from
http://www.kurzweilai.net/meme/frame.html?main=memelist.html?m=2%23612,
where he *does* mention friendliness as a problem to be considered,
which is a point in his favour, but not nearly enough to make up for
the terrifying paragraph above. Has he never heard of nuclear
weapons???

a2i2 is even worse; I can't find any mention of friendliness as
even a problem worth considering anywhere on
http://www.adaptiveai.com/research/index.htm, although I admit
that, with so little content there in the first place, I didn't
feel the need to spend much time on it.

So, that's that. I have yet to find anyone that I am capable of
judging as better for the job than Eliezer. Maybe he's *not* the
best for the job, who knows, but he's the best I've been able to
find and evaluate.

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"

