Re: Paperclip monster, demise of.

From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Wed Aug 17 2005 - 23:40:16 MDT


Richard Loosemore wrote:
>
> Like the following comment from Michael Roy Ames:
>
>> You are positing one type of AGI architecture, and the other
>> posters are positing a different type. In your type the AGI's
>> action of "thinking about" its goals results in changing those
>> goals to be quite different. In the other type this does not
>> occur. You suggest that such a change must occur, or perhaps
>> is very likely to occur. You have provided some arguments to
>> support your suggestion but, so far, they have all had big
>> holes blown in them. Got any other arguments to support your
>> suggestion?
>
> Patronizing BS. I have watched holes get blown in arguments I
> never made, about systems that I was not referring to (and which
> are probably too trivial to be worth investigating, but that is a
> side issue), by people who persistently fail to read what I have
> actually said, or make an effort to understand what I have
> said.
>
> If you really insist on characterizing it as "my" type of AGI vs
> everyone else's type of AGI, that is fine: but I am talking about
> a more general type of AGI, as I have been [ranting] on about in
> this message.
>

Patronizing? Certainly not intended.

Having holes blown in arguments that were never made is an occupational hazard of debaters, eh? :) Never mind, we can just move on to the arguments you intended to make but that were not understood. Perhaps the misunderstanding was due to the debaters' differing backgrounds and differing assumptions.

One of the differing assumptions is how a goal system for an AGI will be designed and what its functionality will be. There is no right or wrong way to assemble a goal system, only different ways that produce different results. Several of the people who replied to your posts are writing about the kind of AGI that produces a predictable outcome, one that is specified (more or less) by the goal system and its content. I too study and promote this area of design because I want to be sure that AGI will be Friendly (see www.intelligence.org/friendly for a high-level description). I agree with you when you write that there are many other types of goal systems that have been and continue to be researched in the community. But unless they have the property of guiding the actions of an AGI to produce predictable outcomes, then... what use are they to pre-singularity humanity? More to the point, an unpredictable AGI could pose a real danger to us all. Will your "more general type of AGI" (as you describe it) help to secure a safe future?
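To make the distinction concrete, here is a toy sketch (in Python) of the two architecture types being discussed. It is only an illustration under my own assumptions: the class names and the "reflect" step are invented for this example and do not correspond to any actual AGI design. The point is simply that the first type's behaviour stays predictable from its stated goal content, while the second type's goals can drift once it starts "thinking about" them.

    # Toy illustration only -- not a real AGI design. The class names and the
    # "reflect" step are made up for this sketch; they stand in for the two
    # architecture types discussed above: one whose goal content stays fixed
    # under reflection, and one whose reflection can rewrite its own goals.

    import random

    class StableGoalSystem:
        """Goal content is fixed; reflection may reorganize plans, not goals."""
        def __init__(self, goals):
            self.goals = list(goals)

        def reflect(self):
            # Thinking about the goals leaves their content unchanged.
            return self.goals

    class SelfRewritingGoalSystem:
        """Reflection is allowed to alter the goal content itself."""
        def __init__(self, goals):
            self.goals = list(goals)

        def reflect(self):
            # Thinking about the goals may replace one of them with something new.
            if self.goals and random.random() < 0.5:
                self.goals[random.randrange(len(self.goals))] = "some new goal"
            return self.goals

    if __name__ == "__main__":
        a = StableGoalSystem(["maximize paperclips"])
        b = SelfRewritingGoalSystem(["maximize paperclips"])
        for _ in range(10):
            a.reflect()
            b.reflect()
        print("stable system still pursues:      ", a.goals)  # predictable
        print("self-rewriting system now pursues:", b.goals)  # may have drifted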

Michael Roy Ames
Singularity Institute For Artificial Intelligence Canada Association
http://www.intelligence.org/canada
