From: Paul Fidika (Fidika@new.rr.com)
Date: Sat Jan 03 2004 - 19:29:19 MST
Tommy McCabe wrote:
> I am suggesting that the human trait of selfishness
> (ie, goal systems that center around 'I'), which makes
> capitalism necessary, or at least, vastly preferable,
> to any society whose individuals have differing goals,
> is not intrinsic to minds-in-general, and therefore
> capitalism is not required for minds-in-general.
Selfishness has nothing intrinsically to do with capitalism's success.
Groups of individuals competing for some finite resource or striving toward
some complex goal (selfish or otherwise) form a dynamical system that is
amenable to self-organization. Humans are innately more selfish than not, so
selfishness acting as a driving force is sufficient for self-organization
(particularly among humans), but it is hardly necessary for self-organization
in minds-in-general. Capitalism works because it harnesses the principles of
self-organization, while command economies explicitly attempt to stamp out
self-organization and replace it with their own rigid systems, which, as
history has shown, doesn't work very well. This reminds me of sign language;
the sign languages that arise spontaneously are always much more expressive
and easier for a deaf individual to learn than any sign-language system ever
created by a board of "experts."
I expect that transhumans will still use capitalism-esque systems, though
probably without the selfishness, or whatever other systems are easily
amenable to self-organization.
Samantha Atkins wrote:
>How do we know that one of the primary design
>goals of the project, recursive self-improvement,
>will actually be what is built? Huh? Our ever
>growing body of knowledge gathered and
>transmitted to each generation is an example of
>using evolutionary selection alone? This seems
>rather strained.
Speaking of which, why is everyone on this list so confident that
recursive self-improvement will even work? It seems to be tacitly assumed on
this list that the amount of intelligence it will take to create a new
intelligence will increase logarithmically, or at most linearly, with the
sophistication of the intelligence to be created, but what is this
assumption based upon? For example, if X is some objective, numeric
measure of intelligence, isn't it more likely that it will take X^2
worth of intelligence (either a single mind iterating over a certain amount
of time or multiple minds working in parallel) to create an X + 1
intelligence? Or worse yet, might it take 2^X worth of intelligence to create
an X + 1 intelligence? Perhaps 2^2^X? If so, then there are very definite
limits on how far a super-intelligence will be able to go before it "runs out
of steam," and it might not be all that far.
I don't claim to be offering an argument that recursive self-improvement
won't work well, but why does everyone here seem to be so convinced that it
will work at all?
~Paul Fidika
Fidika@new.rr.com