Re: Manhattan, Apollo and AI to the Singularity

From: Richard Loosemore (rpwl@lightlink.com)
Date: Fri Aug 25 2006 - 12:15:41 MDT


Michael Anissimov wrote:
> On 8/24/06, Richard Loosemore <rpwl@lightlink.com> wrote:
>
>> This impression would be a mistake. To take just the issue of
>> friendliness, for example: there are approaches to this problem that
>> are powerful and viable, but because the list core does not agree with
>> them, you might think that they are not feasible, or outright dangerous
>> and irresponsible. This impression is a result of skewed opinions here,
>> not necessarily a reflection of the actual status of those approaches.
>
> I'm sure that many on the list would be interested in seeing you write
> up your ideas as a web page. A search for "Loosemore friendliness" only
> brings up your posts on this list. Don't let the "list core" get you
> down.
>
> The problem with designing motivational systems for AIs is that 99.9%
> of people who attempt it have such a poor conception of the problem at
> hand that they aren't even wrong. For example, see this:
>
> http://www.ethicalvalues.com/
>
> The guy who wrote this probably isn't a kook, and might even come
> across as quite intelligent in person. It's just that his ideas for
> AI are complete nonsense.
>
> I'm sure your ideas aren't, and I know they've been discussed on this
> list before, but it would be nice to see them on a static page.
>

Michael,

It has been, uhh, pointed out to me that you made a polite request
here. I did not fail to notice it, but I have been busy. I will indeed
write more on this and put it somewhere accessible. My ToDo list is a
little long right now, but I will get to it eventually.

Richard Loosemore.
