Re: AI Goals [WAS Re: The Singularity vs. the Wall]

From: Mike Dougherty (msd001@gmail.com)
Date: Wed Apr 26 2006 - 16:27:46 MDT


I suggest that there will only be one AGI in the same way that an ET would
treat the first human it encounters as representative of Humanity.

1. Any AGI that follows the first will be viewed as so similar to the first
as to be indistinguishable.
2. Any self-directed AGI would likely recognize another AGI as a resource, in
the same way that other humans are limited resources or that the Internet is
a resource. Assuming the interconnect between two AGIs is high-bandwidth and
low-latency, there is no reason why our communication with either one of them
would not immediately aggregate the knowledge base of both. This aggregation
would happen either as a result of our own double-checking, or because they
would "compare notes" with each other in an effort to evaluate answer
fitness.
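To make the "compare notes" idea concrete, here is a toy sketch (all names
and data are hypothetical, not any real AGI design): two knowledge bases are
merged so that a query to either agent afterward draws on the union, keeping
whichever answer scored higher on fitness.

```python
# Toy sketch: two agents "compare notes" by merging knowledge bases.
# Each entry maps a question to an (answer, fitness) pair.

def aggregate(kb_a: dict, kb_b: dict) -> dict:
    """Merge two knowledge bases; where both hold an answer,
    keep the one with the higher fitness score."""
    merged = dict(kb_a)
    for question, (answer, fitness) in kb_b.items():
        if question not in merged or fitness > merged[question][1]:
            merged[question] = (answer, fitness)
    return merged

agi_one = {"q1": ("A", 0.9), "q2": ("B", 0.4)}
agi_two = {"q2": ("C", 0.8), "q3": ("D", 0.7)}

# After aggregation, asking either agent reaches the combined knowledge.
shared = aggregate(agi_one, agi_two)
print(shared)  # q2 resolves to the higher-fitness answer ("C", 0.8)
```

The point of the sketch is only that with a cheap enough interconnect, the
two knowledge bases become observationally one.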

If the AGI is based on a distributed architecture such that any sub-system
is an "expert" at a limited range of knowledge, with the collective whole
being "the AGI" - then conversing with a single sub-system is an unfair
measure of the whole in the way that analyzing a single neuron is an unfair
measure of the function of our entire brain.
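The single-neuron analogy can be sketched in a few lines (again, a toy with
hypothetical names, not a proposed architecture): each sub-system answers
only within its narrow domain, and only the collective covers the full
range.

```python
# Toy sketch: an "AGI" as a collection of narrow expert sub-systems.

class Expert:
    def __init__(self, domain: str, answers: dict):
        self.domain = domain
        self.answers = answers

    def answer(self, question: str):
        # A single sub-system only knows its own narrow domain.
        return self.answers.get(question)

class CollectiveAGI:
    def __init__(self, experts: list):
        self.experts = experts

    def answer(self, question: str):
        # The whole routes each question to whichever expert can answer it.
        for expert in self.experts:
            result = expert.answer(question)
            if result is not None:
                return result
        return None

physics = Expert("physics", {"why is the sky blue": "Rayleigh scattering"})
history = Expert("history", {"when did WWII end": "1945"})
agi = CollectiveAGI([physics, history])

# Judging one expert alone (like probing one neuron) misses the whole:
print(physics.answer("when did WWII end"))  # None - outside its domain
print(agi.answer("when did WWII end"))      # the collective answers: 1945
```

So a conversation with one sub-system measures only that sub-system, while
the behavior we would call "the AGI" lives in the collective.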

Sorry that until this sentence I did not mention "Goal System" or an
acronymic buzzword :)

On 4/25/06, Richard Loosemore <rpwl@lightlink.com> wrote:
>
> Here is another subtle issue: is there going to be one AGI, or are
> there going to be thousands/millions/billions of them? The assumption
> always seems to be "lots of them," but is this realistic? It might well
> be only one AGI, with large numbers of drones that carry out dumb donkey
> work for the central AGI. Now in that case, you suddenly get a
> situation in which there are no collective effects of conflicting
> motivations among the members of the AGI species. At the very least,
> all the questions about goals and species dominance get changed by this
> one-AGI scenario, and yet people make the default assumption that this
> is not going to happen: I think it very likely indeed.
>



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT