From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Jun 01 2004 - 14:24:46 MDT
Peter Voss wrote:
> Eliezer, let me try to summarize your current thinking (without commenting
> on the feasibility), to see if I understand the essentials:
>
> 1) Seed AGI is too dangerous, and its development should be shelved in favor
> of what you propose.
For our purposes (pragmatics, not AI theory) FAI is a special case of seed
AGI. Seed AGI without having solved the Friendliness problem seems to me a
huge risk, i.e., seed AGI is the thing most likely to kill off humanity if
FAI doesn't come first. If a non-F seed AGI goes foom, that's it, game
over. I think we should be building takeoff safeguards into essentially
everything "AGI" or "seed", because it's too dangerous to guess when
safeguards start becoming necessary. I can't do the math to calculate
critical mass, and there are too many surprises when one can't do the math.
> 2) You propose building a non-AGI, non-sentient system with the goal and
> ability to:
A Really Powerful Optimization Process is an AGI but non-sentient, if I can
figure out how to guarantee nonsentience for the hypotheses it develops to
model sentient beings in external reality.
> a) extract a coherent view from humanity of what humans would want if
> they were more 'grown up' - ie. were more rational, used more/better
> knowledge in their decisions, had overcome many of their prejudices and
> emotional limitations.
...more or less.
> b) recurse this analysis, assuming that these human desires had largely
> been achieved.
Recursing the analysis gets you a longer-distance extrapolation, but of
presumably more grownup people.
> c) Continue to improve this process until these computed human wants
> converge/ cohere 'sufficiently'
Or fail to converge, in which case I'd have to look for some other reasonably
nice solution (or more likely move to a backup plan that had already been
developed).
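
To make the shape of that loop concrete, here is a toy sketch in Python of
the control flow only. Every function, threshold, and measure in it is a
hypothetical stand-in for illustration, not anything specified in the actual
proposal: recurse the extrapolation, check whether the computed wishes cohere
sufficiently, and fall back to a backup plan if they never do.

# Toy sketch of the control flow only; all names and numbers are
# hypothetical stand-ins, not part of the proposal itself.
from statistics import mean, pstdev

def extrapolate(wishes, rate=0.5):
    """Toy stand-in for one round of extrapolating 'more grownup' wishes:
    nudge each wish toward the group mean."""
    m = mean(wishes)
    return [w + rate * (m - w) for w in wishes]

def coherent_enough(wishes, threshold=0.1):
    """Toy stand-in: treat low spread as 'sufficient' coherence."""
    return pstdev(wishes) < threshold

def run(initial_wishes, max_rounds=100, backup_plan="backup plan"):
    wishes = list(initial_wishes)
    for _ in range(max_rounds):          # continue to improve the process...
        wishes = extrapolate(wishes)     # ...recurse the analysis...
        if coherent_enough(wishes):      # ...until the wants cohere sufficiently
            return ("converged", wishes)
    return ("failed to converge", backup_plan)

print(run([0.0, 1.0, 3.0]))

The toy converges because its stand-in extrapolation is a simple averaging
step; nothing about the real question of whether human volitions cohere
follows from it.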
> -- only then implement strategies that
> ensure that these wishes are not thwarted by natural events or human action.
That's phrased too negatively; the volition could render first aid, not
only protect - do whatever the coherent wishes said was worth doing,
bearing in mind that the coherent wish may be to limit the work done by the
collective volition.
> Is this essentially correct?
Minor quibbles aside, yeah.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence