From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Wed Jun 02 2004 - 07:46:19 MDT

Ben Goertzel wrote:
>
> Philip Sutton wrote:
>
>>Eliezer,
>>
>>It appears that you now want to create a non-sentient super,
>>general AI (optimisation process) rather than a sentient
>>super, general AI (optimisation process).
>
> I'm not sure that his proposed optimization process has to be all that
> *general*. Perhaps it has to be general in its potential applicability,
> but it seems to me that, under his proposal, it has to be
> applied only to a few specific problems, viz.:
>
> -- inferring the "collective volition" of humanity
> -- estimating which actions are most likely to lead to the fulfillment
> of this collective volition

This obviously requires general intelligence, and a huge amount of power
that can only come from recursive self-improvement. But I suppose that is
what you meant by general applicability but not general application. There
is really no such thing as a "general application"; every optimization
process is doing *something*.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence