From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Aug 19 2005 - 17:46:54 MDT
Michael Vassar wrote:
> The way plus a Friendly goal does tell you what to eat, who to sleep
> with, etc, though the usual answer is "the expected value of formal
> consideration is low, follow habit". The art of rationality is about
> deciding the state of the world AND the correct action to take; with the
> latter taking precedence over the former.
Friendly AI is a term of art that applies to AIs only. There is no such thing
as a Friendly human.
Expected utility maximization can tell you the correct action to take relative
to a utility function. Whether expected utility maximization is 'rational' is
a less open-and-shut question than whether Bayesian probability theory is
rational.
Occasionally people who know very little about real-world cults start going on
about whether the Singularity / SIAI / SL4 / etc. is a cult. The cure I would
prescribe to them is to spend three days with the Raelians. But there's also
the question of how it is that SL4 *avoids* becoming a cult, which, contrary
to what people think, takes an effort. Cultishness is a natural human impulse
and a natural condition into which groups slide - a high-entropy state, as
'twere, requiring a continual expenditure of work to keep out of.
Part of that is that you keep your special expertise special. I don't tell
people what to eat, who to sleep with, how to dress, how to cook food, where
to go to school, who to associate with, what language to speak, which websites
to visit, etc. etc. It's bad enough that I go around telling people how to
update their probabilistic beliefs about questions of simple fact.
It is also a good idea to avoid buzzwords.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence