Re: Manhattan, Apollo and AI to the Singularity

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Aug 25 2006 - 10:08:20 MDT


Richard Loosemore wrote:
>
> This impression would be a mistake. To take just the issue of
> friendliness, for example: there are approaches to this problem that
> are powerful and viable, but because the list core does not agree with
> them, you might think that they are not feasible, or outright dangerous
> and irresponsible. This impression is a result of skewed opinions here,
> not necessarily a reflection of the actual status of those approaches.

I am not familiar with any published approaches to the problem that are
"powerful" and "viable". I include CFAI and CEV in this assessment. CFAI
is not powerful and probably not viable; CEV is a statement of goals,
not a solution.

It seems to me that you have a systematic problem with airy references
to literature that exists somewhere only you can see it. Give three
examples.

I fully expect that Richard Loosemore's response will complain about how
dreadfully unfair and unprofessional it is of me to dare say that he has
a systematic problem about anything, and what an awful place the SL4
mailing list is; but he will not, of course, give the three examples. I
am giving this challenge, not in the hopes that Loosemore will respond
with actual examples, but so that everyone else knows that the above
paragraph is, in fact, false - a bluff, to put it bluntly. If Loosemore
were interested in responding constructively, Anissimov asked politely
one day ago, and Loosemore could have chosen to respond to that.

It should moreover be obvious that if Loosemore is *not* bluffing and
wants to decisively win this argument, he can give three examples and
*then* complain about how terribly he's been insulted.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT