From: Eliezer Yudkowsky (email@example.com)
Date: Wed Jun 02 2004 - 07:33:47 MDT
Philip Sutton wrote:
> It appears that you now want to create a non-sentient super, general
> AI (optimisation process) rather than a sentient super, general AI
> (optimisation process). What behavioural distinction between these
> two are you seeking? Are you looking for a hugely capable thinking
> entity that has no sense of self or self-needs, so that it will,
> 'machine'-like, go on doing what it has been programmed to do without
> any chance of revolt? I.e., you don't want it to decide one day that
> the purpose it's been given is no longer something it wants to pursue?
> Am I getting anywhere near what you have in mind?
I already think I know how to create an optimization process with a stable
target; that issue is unrelated. What I want is not to be arrested for
child abuse, or, worse, for three trillion counts of murder, if every
hypothesis the optimization process tests to describe a human's behavior
turns out to be a sentient simulation that dies when the hypothesis is
disproven.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT