From: Peter Voss (email@example.com)
Date: Tue Jun 01 2004 - 13:09:57 MDT
Eliezer, let me try to summarize your current thinking (without commenting
on the feasibility), to see if I understand the essentials:
1) Seed AGI is too dangerous, and its development should be shelved in favor
of what you propose.
2) You propose building a non-AGI, non-sentient system with the goal and
ability to:
a) extract a coherent view from humanity of what humans would want if
they were more 'grown up' - i.e., were more rational, used more/better
knowledge in their decisions, had overcome many of their prejudices, and
so on;
b) recurse this analysis, assuming that these human desires had largely
been realized;
c) continue to improve this process until these computed human wants
converge/cohere 'sufficiently' -- only then implement strategies that
ensure that these wishes are not thwarted by natural events or human action.
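(For concreteness, the iterate-until-coherent structure of (b)-(c) can be sketched as a fixed-point loop. This is purely a toy illustration of the control flow; the preference representation, the `extrapolate` step, and the coherence test are all hypothetical stand-ins, not anything from the actual proposal.)

```python
def extrapolate(prefs):
    # Hypothetical one-step 'growing up' of a preference vector.
    # For illustration only: a simple contraction toward a fixed target,
    # so the loop below demonstrably converges.
    target = [1.0, 0.5, -0.25]
    return [p + 0.5 * (t - p) for p, t in zip(prefs, target)]

def extrapolate_until_coherent(prefs, tol=1e-6, max_iters=1000):
    """Recurse the extrapolation until successive results differ by
    less than tol -- a stand-in for 'sufficient' coherence."""
    for _ in range(max_iters):
        new = extrapolate(prefs)
        if max(abs(a - b) for a, b in zip(new, prefs)) < tol:
            return new
        prefs = new
    raise RuntimeError("did not cohere within max_iters")

result = extrapolate_until_coherent([0.0, 0.0, 0.0])
```

(The real difficulty, of course, is entirely inside `extrapolate` and the coherence test, not in the loop itself.)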
Is this essentially correct?