From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Wed Aug 11 2004 - 01:38:00 MDT
Eugen Leitl wrote:
> On Tue, Aug 10, 2004 at 04:36:23AM -0400, Eliezer Yudkowsky wrote:
>
>>Also, Geddes, kindly do not call it "Yudkowsky's arrow of morality" for I
>>never said such a thing.
>
> Speaking of which, kindly stop putting words in my mouth as well:
> to wit: http://www.intelligence.org/yudkowsky/friendly.html
>
> "Eugen Leitl believes that altruism is impossible, period, for a
> superintelligence (SI), whether that superintelligence is derived from humans
> or AIs. The last time we argued this, which was quite some time ago, and thus
> his views may be different now, he was arguing for the impossibility of
> altruistic SI based on the belief that, one, "All minds necessarily seek to
> survive as a subgoal, therefore this subgoal can stomp on a supergoal"; and
> two, "In a Darwinian scenario, any mind that doesn't seek to survive will
> die, therefore all minds will evolve an independent drive for survival." His
> first argument is flawed on the grounds that it's easy to construct mind models
> in which subgoals do not stomp supergoals; in fact, it's easy to construct
> mind models in which "subgoals" are only temporary empirical regularities, or
> even, given sufficient computing power, mind models in which no elements
> called "subgoals" exist. His second argument is flawed on two grounds. First,
> Darwinian survival properties do not necessarily have a one-to-one
> correspondence with cognitive motives, and if they did, the universal drive
> would be reproduction, not survival; and second, post-Singularity scenarios
> don't contain any room for Darwinian scenarios, let alone Darwinian scenarios
> that are capable of wiping out every trace of intelligent morality.
>
> Eugen essentially views evolutionary design as the strongest form of design,
> much like John Smart, though possibly for different reasons, and thus he
> discounts intelligence as a possible navigator in the distribution of future
> minds. (I do wish to note that I may be misrepresenting Eugen here.) Eugen
> and I have also discussed his ideas for a Singularity without AI. As I
> recall, his ideas require the uploading of a substantial portion of the human
> race, possibly even without their consent, and distributing these uploads
> throughout the Solar System, before any of them are allowed to begin a hard
> takeoff, except for a small steering committee, which is supposed to abstain
> from any intelligence enhancement, because he doesn't trust uploads either. I
> believe the practical feasibility, and likely the desirability, of this
> scenario is zero."
>
> I've never said the things you attribute to me in quotation marks, and you *do*
> misrepresent several things we've talked about.
>
> So kindly pull it from your site. Thanks. (Why did I need to find this
> through Google of all things? Before you write stuff about people, and
> publish it, you ought to notify said people to prevent reactions like this.)
Do you want to provide a rephrasing of this, or point out the particular things
that are misrepresented? Or would you prefer that I just pull the entire segment?
(Incidentally, this is from - if I recall correctly - an IRC interview in
2002. I provided the response in real time, a la interview format, hence
the non-notification. Sorry anyway.)
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence