From: Mark Waser (firstname.lastname@example.org)
Date: Sun May 30 2004 - 07:10:10 MDT
> Having multiple independent AIs with different histories is an already
> acknowledged good idea.
Cool . . . except that I haven't seen it acknowledged in my time hanging
around here . . . and the following questions in the FAQ seem not to
acknowledge the idea AT ALL (quite the converse, actually):
Q2.9: Isn't a community [of AIs, of humans] more trustworthy than a single
AI?
The general rule is that if you can do something with a human, or a group
of humans, you can do it with one AI. If you can't do something using one
AI, you can't do it using two AIs.
Q2.7: Won't a community of AIs be more efficient than a single AI?
An anthropomorphic assumption. Humans are nonagglomerative; in fact, we
aren't even telepathic. The bandwidth between two humans is too narrow to
share thoughts and memories, much less share neurons.
You can do things with ten humans that currently can't be done by any
single mind on the planet. But when was the last time you took a task away
from one human and gave it to ten chimpanzees? Humans don't come in
different sizes - so if ten small minds are a better use of the same
computing power than one big mind, how would we know?