From: Tyler Emerson (emerson@intelligence.org)
Date: Tue May 18 2004 - 12:16:23 MDT
The SIAI Voice - May 2004
Bulletin of the Singularity Institute for Artificial Intelligence
A nonprofit organization and community for humane AI research
http://www.intelligence.org/
institute@intelligence.org
To view the online version:
http://www.intelligence.org/news/newsletter.html
To receive the bulletin by email every other month:
http://www.intelligence.org/news/subscribe.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CONTENTS
1. 2004 Website Campaign
2. 2004 Challenge Grant Challenge
3. Executive Director - Tyler Emerson
4. Advocacy Director - Michael Anissimov
5. Featured Content: What is the Singularity?
6. Donors for March and April
7. Singularity Institute FAQ
8. AI Project Update
9. New at our Website
10. Volunteer Contributions
11. Volunteer Opportunities
12. Weekly Volunteer Meeting
13. Q&A with Eliezer Yudkowsky
14. Singularity Statement from Anders Sandberg
15. Singularity Quote from Ray Kurzweil
16. Events - TransVision 2004
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Welcome to the first bulletin from the Singularity Institute. We hope
you find it valuable. Comments on what we've done well, done poorly,
or missed are welcome, and we would be grateful to hear what you would
like from our updates in the coming months.
Thank you for taking the time to explore the Institute.
Tyler Emerson
emerson@intelligence.org
(417) 840-5968
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. 2004 WEBSITE CAMPAIGN
3 Laws Unsafe is an upcoming website campaign from SIAI. The campaign
will tie in to the July 16th release of "I, Robot," the feature film
based on Isaac Asimov's short story collection of the same name, in
which his 3 Laws of Robotics were first introduced.
The 3 Laws of Robotics represent a popular view of how to construct
moral AI, and Asimov himself often explored their failures in his
stories. We hope to advance the Asimov tradition of deconstructing the
3 Laws by encouraging critical, technical thinking on whether they are
a real solution to the problem of creating moral AI.
If you can contribute to the success of 3 Laws Unsafe, email
institute@intelligence.org. We're especially looking for graphic and site
designers who can create the site in blog format, promoters who can
help ensure that it has a high search engine ranking for keyword
combinations related to the film, and writers who can submit content.
This project is urgent because of the film's July 16th release. Our
deepest thanks to everyone who contributes to its success.
3 Laws Unsafe >
http://www.intelligence.org/asimovlaws.html
Ways to Contribute >
http://www.intelligence.org/action/opportunities.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2. 2004 CHALLENGE GRANT CHALLENGE
The Singularity Institute is now seeking major donors to provide
matching funds for our $10,000 Challenge Grant Challenge for Research
Fellow Eliezer Yudkowsky - one of the leading experts on the
singularity and the development of moral AI.
Major donors to the Challenge Grant Challenge will match any donations
up to $10,000, resulting in $20,000 in possible donations. Once the
pledges for matching donations are secured, the Challenge Grant itself
will run for 90 days.
Donors may pledge by emailing institute@intelligence.org or phoning (404)
550-3847. Our sincere thanks to the first major donor, Jason
Joachim, who has pledged $2,000.
All funds go toward a subsistence salary for Yudkowsky so that he may
continue his critical research on the theory of Friendly AI - the
cornerstone of our AI project, and a theory that must be sufficiently
complete before the project can responsibly begin.
For more on the value of Yudkowsky's research, see:
The Necessity of Friendly AI >
http://www.intelligence.org/friendly/why-friendly.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3. EXECUTIVE DIRECTOR - TYLER EMERSON
On March 4, 2004, the Singularity Institute announced Tyler Emerson as
our Executive Director. Emerson will be responsible for guiding the
Institute. His focus is on nonprofit management, marketing,
relationship fundraising, leadership and planning. He will seek to
cultivate a larger and more cohesive community that has the necessary
resources to develop Friendly AI. He can be reached at
emerson@intelligence.org.
More >
http://www.intelligence.org/about.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
4. ADVOCACY DIRECTOR - MICHAEL ANISSIMOV
On April 7, 2004, the Singularity Institute announced Michael
Anissimov as our Advocacy Director. Michael has been an active
volunteer for two years, and is one of the more prominent voices in
the singularity community. He is committed and thoughtful, and we feel
fortunate to have him help lead our advocacy. In 2004 and beyond,
Michael will represent SIAI at key conferences, engage in outreach
efforts to communities and individuals, and write to convey the
Institute's mission to a wider audience. He can be
reached at anissimov@intelligence.org.
More >
http://www.acceleratingfuture.com/michael
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5. FEATURED CONTENT: WHAT IS THE SINGULARITY?
The singularity is the technological creation of smarter-than-human
intelligence. There are several technologies that are often mentioned
as heading in this direction. The most commonly mentioned is probably
Artificial Intelligence, but there are others: direct brain-computer
interfaces, biological augmentation of the brain, genetic engineering,
ultra-high-resolution scans of the brain followed by computer
emulation. Some of these technologies seem likely to arrive much
earlier than the others, but there are nonetheless several independent
technologies all heading in the direction of the singularity - several
different technologies which, if they reached a threshold level of
sophistication, would enable the creation of smarter-than-human
intelligence.
More >
http://www.intelligence.org/what-singularity.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
6. DONORS FOR MARCH AND APRIL
We offer our deepest gratitude to the following donors. They
recognize the extraordinary value of the Singularity Institute's
pursuit: responsible intelligence enhancement - a Friendly
singularity - through Friendly AI research. They have taken the
crucial step of financially supporting SIAI's research. Whether it is
$10 or $1,000, one time or each month, we ask that everyone who
supports our mission in principle become a regular donor.
Major Contributions:
* Edwin Evans - $7,000
* Mikko Rauhala - $1,200
Periodic Contributions:
* Jason Abu-Aitah - $10 (monthly)
* David Hansen - $100 (monthly)
* Jason Joachim - $150 (monthly)
* Aaron McBride - $10 (monthly)
* Ashley Thomas - $10 (monthly)
One-Time Contributions:
* Anonymous - $200
* Michael Wilson - $200
Donate to the Singularity Institute >
http://www.intelligence.org/donate.html
Why Even Small Donations Matter >
http://www.intelligence.org/small-donations-matter.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
7. SINGULARITY INSTITUTE FAQ
Q: Why does your current research focus on Artificial Intelligence?
A: Artificial Intelligence is easier to get started on than, say,
neuroelectronics. Artificial Intelligence is also easier to leverage -
in our estimate, a small to medium-sized organization can potentially
do more to advance Artificial Intelligence than to advance
neuroelectronics. Furthermore, given the relative rates of
progress in the underlying technologies, our current best guess is
that Artificial Intelligence will be developed before brain-computer
interfaces; hence, to accelerate the singularity, one should
accelerate the development of Artificial Intelligence; to protect the
integrity of the singularity, one should protect the integrity of
Artificial Intelligence (i.e., Friendly AI). Singularity strategy is a
complex question that requires considering not just the development
rate of one technology, but the relative development rates of
different technologies and the relative amounts by which they can be
accelerated or influenced. At this time, Artificial Intelligence
appears to be closer to being developed, to be more easily
accelerated, to require fewer resources to initiate a serious project,
and to offer more benefit from interim successes.
If the Singularity Institute had enough resources to fully support
multiple projects, we would branch out; but until then, it seems wise
to focus research efforts on one project.
More >
http://www.intelligence.org/institute-faq.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
8. AI PROJECT UPDATE
The centerpiece of SIAI's effort to bring about a Friendly
singularity is an upcoming software development project. The aim is to
produce the world's first artificial general intelligence: a Friendly
"seed AI." To do this we will employ the most advanced theoretical
framework for seed AI available - an architecture derived from, but
much more comprehensive and sophisticated than, the one described in
"Levels of Organization in General Intelligence." As of May 2004, this
framework is close to completion, but a great deal of work remains to
be done on the associated Friendliness theory. It is the policy of the
Singularity Institute not to initiate a project with major potential
for existential risk until it has been proven that the net result will
be positive for all of humanity. Fortunately, we have made strong
progress on a formal theory of Friendliness (the document "Creating
Friendly AI" describes an informal precursor) and will continue to
develop it until it is complete enough to allow project initiation.
Although we are not yet ready to start building Friendly AI, we are
close enough to begin forming the development team. At present, we
have two confirmed team members, including Eliezer Yudkowsky. The SIAI
is now actively searching for Singularitarians with software
engineering and cognitive science expertise to join the development
team. Volunteers who make the grade may be able to start work on a
part-time or full-time basis immediately.
The search for suitable Friendly AI developers is a top priority for
SIAI. If you believe you may be suitable or know of someone who may,
please read "Team member requirements" and then consider getting in
touch at institute@intelligence.org. We are searching for nothing less than
the core team to fulfill our mission; we need the very best we can find.
Team member requirements >
http://www.sl4.org/bin/wiki.pl?SoYouWantToBeASeedAIProgrammer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
9. NEW AT OUR WEBSITE
About Us Page Updated >> http://www.intelligence.org/about.html
Read about our mission, board, staff and accomplishments
Chat >> http://www.intelligence.org/chat
Access our volunteer chat room through the Java applet
Singularity Quotes >> http://www.intelligence.org/comments/quotes.html
Quotes from Vernor Vinge, Ray Kurzweil, Hans Moravec and more
Tell Others about the Singularity Institute >>
http://www.intelligence.org/tell-others.html
Spread the knowledge - open the opportunity - to your email circle
Become a Singularity Volunteer >>
http://www.intelligence.org/volunteer.html
Contribute your time and talent to a safe singularity
Why We Need Friendly AI >>
http://www.intelligence.org/friendly/why-friendly.html
Why Moore's Law is no friend to Friendly AI research
Donations Page Updated >> http://www.intelligence.org/donate.html
Contributions may be made monthly or yearly
Feedback >> http://www.intelligence.org/feedback.html
Your comments, questions and suggestions are welcomed
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
10. VOLUNTEER CONTRIBUTIONS
The notable progress made in March and April was possible because of
considerable volunteer help. We especially want to thank Christian
Rovner, who made tangible progress each week for eight weeks. We feel
truly fortunate to have him with SIAI.
Special thanks to these individuals for their efforts: Michael Roy
Ames, Joshua Amy, Michael Anissimov, Nick Hay, Manny Halos, Shilpa
Kukunooru, Tommy McCabe, Tyrone Pow, and Christian Rovner.
View Contributions >
http://www.intelligence.org/action/contributions.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
11. VOLUNTEER OPPORTUNITIES
We believe that W. Clement Stone's aphorism "Tell everyone what you
want to do and someone will want to help you do it" will hold true for
our charitable mission.
If you can contribute this year, please email institute@intelligence.org.
View Opportunities >
http://www.intelligence.org/action/opportunities/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
12. WEEKLY VOLUNTEER MEETING
The Singularity Institute hosts a chat meeting for volunteers every
Sunday at 7 PM EST (GMT-5). The Internet Relay Chat (IRC) server is
intelligence.org, port 6667; chat room #siaiv. Each meeting revolves
around planning and action.
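Any standard IRC client will get you there. Purely as an illustration
of the connection details above - and not an official tool - here is a
minimal Python sketch that connects to the server and joins the
meeting channel; the nickname is a placeholder:
    import socket

    # Minimal, unofficial sketch: join the SIAI volunteer meeting over
    # raw IRC. Server, port, and channel are as listed above; the
    # nickname is a placeholder - substitute your own.
    sock = socket.create_connection(("intelligence.org", 6667))
    sock.sendall(b"NICK SIAIGuest\r\n")
    sock.sendall(b"USER SIAIGuest 0 * :SIAI volunteer\r\n")
    sock.sendall(b"JOIN #siaiv\r\n")
    buffer = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buffer += data
        while b"\r\n" in buffer:
            raw, buffer = buffer.split(b"\r\n", 1)
            line = raw.decode("utf-8", errors="replace")
            print(line)
            # Answer server PINGs so the connection stays open.
            if line.startswith("PING"):
                sock.sendall(("PONG" + line[4:] + "\r\n").encode())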
More >
http://www.intelligence.org/chat/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
13. Q&A WITH ELIEZER YUDKOWSKY
It seems that you're trying to achieve an AI with philosophical
complexity roughly equal to or beyond that of, e.g., Mohandas Gandhi,
Siddhartha Gautama, and Martin Luther King, Jr. Do these individuals
represent to you the "heart" of humanity?
What they represent to me are moral archetypes, not just of
selflessness but of moral reason, of moral philosophy. Whether they
were really as good as their PR suggests is a separate issue, not that
I'm suggesting they weren't - just that it doesn't quite matter. The
key point is that we ourselves recognize that there is such a thing as
greater and lesser altruism, and greater and lesser wisdom of moral
argument, and that from this recognition proceeds our respect of those
who embody the greater altruism and the greater wisdom. There is
something to strive for - an improvement that can be perceived as
"improvement" even by those who are not at that level; a road that is
open to those not already at the destination. Anyone who can recognize
Gandhi as an ideal, and not just someone with strangely different
goals, is someone who occupies a common moral frame of reference with
Gandhi, but less advanced in terms of content, despite a shared
structure. So what the statement "Put the heart of humanity into a
Friendly AI" symbolizes is the idea of moral improvement, and the idea
that a Friendly AI can improve to or beyond levels that we recognize
as ideal levels (e.g., the level of a moral philosopher or of Martin
Luther King Jr.).
More >
http://www.intelligence.org/yudkowsky/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
14. SINGULARITY STATEMENT FROM ANDERS SANDBERG
Anders Sandberg, Science Director, Eudoxa:
The research of SIAI is essentially a bold attempt to explore Smale's
18th problem: What are the limits of intelligence, both artificial and
human? (S. Smale, Mathematical Problems for the Next Century,
The Mathematical Intelligencer, Spring '98.) Developing a theory for how
intelligent systems can improve the way they solve problems has both
practical and theoretical importance. SIAI is also one of the few
organisations devoted to the study of general motivational systems and
how they might be designed to achieve desired behavior - another
open-ended issue of great practical and ethical importance.
More >
http://www.intelligence.org/comments/statements.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
15. SINGULARITY QUOTE FROM RAY KURZWEIL
Ray Kurzweil, "The Law of Accelerating Returns," 2001:
http://www.kurzweilai.net/articles/art0134.html?printable=1
People often go through three stages in examining the impact of future
technology: awe and wonderment at its potential to overcome age-old
problems, then a sense of dread at a new set of grave dangers that
accompany these new technologies, followed, finally and hopefully, by
the realization that the only viable and responsible path is to set a
careful course that can realize the promise while managing the peril.
More >
http://www.intelligence.org/comments/quotes.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
16. EVENTS - TRANSVISION 2004
The World Transhumanist Association's annual event, TransVision, will
be held at the University of Toronto from August 6th to 8th, 2004. The
Singularity Institute is fortunate to be a sponsor of TransVision, and
will have members attending or giving presentations.
Proposal submissions for the conference are being accepted until June
1st. Registration costs range from $100 to $150.
Conference speakers include:
* Steve Mann, Inventor of the wearable computer
* Stelarc, Renowned Australian artist
* Howard Bloom, Author of The Lucifer Principle
* James Hughes, Author of Cyborg Democracy
* Nick Bostrom, Chair of the World Transhumanist Association
* Natasha Vita-More, President of the Extropy Institute
* Aubrey de Grey, Cofounder of the Methuselah Mouse Prize
TransVision 2004 >
http://www.transhumanism.org/tv/2004/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The SIAI Voice is produced by the Singularity Institute for Artificial
Intelligence.
The Singularity Institute for Artificial Intelligence is a 501(c)(3)
nonprofit organization for the pursuit of Friendly AI and responsible
intelligence enhancement - a mission of immense potential. Since
intelligence determines how well problems are solved, the responsible
enhancement of intelligence - a safe singularity - will make difficult
problems, such as the prevention and treatment of Alzheimer's and
AIDS, much easier to solve. If intelligence is improved greatly, every
humanitarian problem we face will be more amenable to solution.
Because AI is positioned to be the first technology to enhance
intelligence significantly, SIAI concentrates on the research and
development of humane AI. By solely pursuing a beneficial singularity,
the Institute presents a rare opportunity for rational philanthropy.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For comments or questions, contact us at (404) 550-3847 or
institute@intelligence.org, or visit our website:
http://www.intelligence.org/
The movement for a safe singularity advances by word of mouth. If you
believe what we do is valuable, it's vital that you tell others.
Share the Bulletin >
http://www.intelligence.org/tell-others.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~