From: Eliezer Yudkowsky (firstname.lastname@example.org)
Date: Mon May 31 2004 - 14:25:15 MDT
> posted on wiki too
> 1. Do you see "elimination of negative qualia in a way that does not directly
> conflict with personal freedom" as a possible attractor for collective
> volition? Do you see it as a probable attractor?
> If not, why (specifically)?
> 2. How well does the AI need to predict the future in order for all of this
> to work?
See PAQ 5.
> 3. Won't the amount of machine intelligence required to wipe out humanity
> arrive much earlier than the amount of intelligence required to accurately
> simulate entire countries filled with people and other smaller AIs? What is
> your plan for this interval of time?
This is not an issue of computing power. Any FAI, any UFAI, that passes
the threshold of recursive self-improvement, can easily obtain enough
computing power to do either. It is just a question of whether an FAI or a
UFAI comes first.
> 4. The FAI is initially not programmed to count animals inside its definition
> of the collective. However, we would like to be people who value all forms of
> life, and if we knew better we'd know that they are sentient to a certain
> extent. THEREFORE, should the FAI give a replicator to each household and
> forbid the killing of animals, even if we were opposed to it? (I think so, but
> just want to check with you.)
This sounds like speculation about outcomes to me. Have you ever donated
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence