From: Peter Voss (peter@optimal.org)
Date: Tue Jun 01 2004 - 13:09:57 MDT
Eliezer, let me try to summarize your current thinking (without commenting
on the feasibility), to see if I understand the essentials:
1) Seed AGI is too dangerous, and its development should be shelved in favor
of what you propose.
2) You propose building a non-AGI, non-sentient system with the goal and
ability to:
   a) Extract a coherent view from humanity of what humans would want if
they were more 'grown up' -- i.e. were more rational, used more/better
knowledge in their decisions, and had overcome many of their prejudices and
emotional limitations.
   b) Recurse this analysis, assuming that these human desires had largely
been achieved.
   c) Continue to improve this process until these computed human wants
converge/cohere 'sufficiently' (see the toy sketch below) -- only then
implement strategies that ensure that these wishes are not thwarted by
natural events or human action.
Is this essentially correct?
Best,
Peter