FAI prioritization

From: Rolf Nelson (rolf.h.d.nelson@gmail.com)
Date: Wed Apr 02 2008 - 18:12:09 MDT


For the duration of this thread, assume that FAI is the best use of
time and resources for a rational altruist. How should resources be
prioritized, in terms of marginal utility? Here are my current
thoughts.

My prioritization:

1. Outreach to mathematically-talented students and recent graduates.
We know that some tiny minority of people exist who will self-motivate
to help with FAI after a quick exposure to these talking points:

* AGI may be possible within the next few decades

* AGI can be dangerous. Suppose you initially give an AI the goal of
making paperclips, with the plan that you will shut the AI down or
modify its goals once you decide you have "enough paperclips."
However, the AI would rather replace the Earth with paperclips
according to its current goal, so it will spontaneously form a subgoal
of preventing you from being willing and able to shut it off or
rewrite its goals.

* Extremely few people are addressing this problem, for example, most
people building an AGI do not put a high priority taking this danger
into account.

* From a utilitarian point of view, this Friendly AI problem is
probably the best use of your time and resources, even compared with
other direly under-addressed problems in society.

Have a reasonable percentage of the most talented students been
exposed to these talking points? Until the answer to that question is
"yes", my current belief is that this is the best marginal use of
resources.

2. Publish a technical document, or better yet a computer program,
that would produce a Friendly outcome, given infinite computing
resources. Presumably such a program would not significantly help
someone to accidentally build a UFAI, but would provide evidence of
whether the FAI community has reached some level of convergence as to
the desired outcome.

3. Put interested people to work solving, on an algorithmic level,
the problems of how to create an FAI with limited computer resources.

4. Track who is working on an AGI, and what evidence there is that
they will or won't succeed.

5. Determine what the current fundamental (causal) points of
disagreement are with and between other AI researchers.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT