From: Damien Broderick (firstname.lastname@example.org)
Date: Wed May 26 2004 - 11:33:16 MDT
At 07:41 AM 5/26/2004 -0700, Jef Allbright wrote:
>An important concept, or view, appears to be absent from this
>discussion. It is that increasingly, wisdom and decisions of large import
>are the result of group thinking. This goes beyond the obvious
>intentional collaboration on group issues that is fairly well recognized,
>especially in today's business world, as necessary for success. It
>consists also, in ways that are more subtle and pervasive, in the shared
>access to global knowledge and opinion via the media, and in the indirect
>effects of immersion in a common economy and social structure. All of
>these lead to an effective intelligence (resulting in decisions and
>actions) greater than any individual human.
>This is difficult for many of us to see, especially in the West, because
>we are steeped in the illusion of Self as some kind of discrete entity.
>And so we naturally think in terms of enhancing this Self, or creating a
>greater Self to guide us. <snip>
This is a wonderful (and wise) post, Jef. Posts on sl4 (and the extropian
list) almost inevitably overvalue the `sovereign individual', because most
of us are daily oppressed and hampered by the shortsightedness, comparative
ignorance and even mulish stupidity of many of those we have to deal with.
It's too easy to edit out the ceaselessly social nature of consciousness,
choice, perception, and wisdom (or its contraries).
Jef comments that `This goes beyond the obvious intentional collaboration
on group issues that is fairly well recognized', but I'd like for a moment
to pull the discussion back to the most primitive or dyadic level where the
constantly constructed Self hides from itself the moderating and/or
enhancing influence of other selves.
I'm a very smart and rather quick-tempered human living with a very smart
and passionate, if perhaps rather less angry, person; it is amusing, in a
rueful way, to pause and take note of how, in small crises, we each vent our
immediate emotional response to a galling situation while the other
sits back somewhat, sympathetic but rather more disengaged, and eventually
suggests a modified, more effective response. Sometimes this merely
involves toning down the rhetoric of the response; sometimes it
might propose a subtle and satisfyingly stinging countermeasure;
sometimes the very act of echoing back what one has heard or blurted is
enough to improve the wisdom quotient of what eventually gets said or done
in public. This process is prey to dyadic convergence, or complicity, or
even *folie à deux*, but it's startling how often it leads to a better,
wiser outcome (and not just a more constrained, stifled, `polite' one).
A wise AI will surely not be a solitary entity rebooting itself into ever
more arcane levels of some sort of timeless wisdom. It must be a social
entity, too, from the outset. It has to listen to others, advise
others, and learn from the iterations. The question (it seems to me) is
whether such a being can emerge from algorithms running on a substrate that
lacks the legacy templates that allow us to empathize with each other (to
the compromised degree that we do).
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT