From: Jef Allbright (jef@jefallbright.net)
Date: Tue Jun 13 2006 - 11:01:21 MDT
On 6/12/06, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
> (4) I'm not sure whether AIs of different motives would be willing to
> cooperate, even among the very rare Friendly AIs. If it is *possible*
> to proceed strictly by internal self-improvement, there is a
> *tremendous* expected utility bonus to doing so, if it avoids having to
> share power later.
Eliezer, most would agree that there are huge efficiencies to be
gained over the evolved biological substrate, but I continue to have a
problem with your idea that a process can recursively self-improve in
isolation. Doesn't your recent emphasis on perception as the
perception of difference (which I strongly agree with) highlight the
contradiction, and the enormity of the "if", in "if it is *possible*
to proceed strictly by internal self-improvement"? Perceiving a
difference presupposes something outside the system to differ from.
- Jef