Re: [extropy-chat] Re: Eugen Leitl on AI design

From: Jef Allbright (jef@jefallbright.net)
Date: Fri Jun 04 2004 - 10:15:15 MDT


Eliezer Yudkowsky wrote:

> Jef Allbright wrote:
>
>>
>> Intelligence is one of the pillars of morality. Another pillar is
>> interdependence. Another, even more subtle, is growth.
>
>
> I agree, provided we limit the case to human morality.
>
>> A few of these rational free-thinkers sensed that there was still
>> something missing. Rationality is bounded by knowledge, and a new
>> level of enlightenment arose in which people began to realize a need
>> for wisdom within uncertainty. Some of these people were mistaken
>> for mystics, but rather than abandoning rational thought, these newer
>> thinkers worked to incorporate rational thinking into a larger
>> framework that acknowledged, and even welcomed, uncertainty.
>
>
> I think you mean "logical thinking" not "rational thinking". Rational
> thinking, in the modern, Bayesian sense of the term, is precisely the
> framework that correctly handles uncertainty. Hence expected utility
> and Bayesian probability. We know exactly how uncertain we are; the
> Way is still a precise art, a dance rather than a walk. (Calmly
> knowing the source of your uncertainty and the rules that govern your
> ignorance is sometimes mistaken for "overconfidence" by those who know
> not the Way.)
>
>> Mathematical statistics (of the frequentist sort and more recently
>> Bayesian) were joined by newer concepts of entropy and theories of
>> information and incompleteness,
>
>
> By "joined", I presume you mean that people (example: E.T. Jaynes)
> showed that the concepts of entropy and information were special cases
> of Bayesian probability theory.
>
>> More recently, concepts of uncertainty and randomness are being
>> overtaken by ideas of chaos and complexity, and rational
>> free-thinkers are discovering some of the inherent limits of modeling
>> and prediction with finite computational resources. We're finding
>> that much of the really interesting stuff can't be modeled or
>> predicted, and the only way to determine the end result is to actually
>> play it out. *This changes the focus of the game away from modeling
>> and extrapolation, and towards understanding what freedoms (points
>> of influence) are available to us in order to create an always
>> evolving and unpredictable future.* These new concepts do not
>> replace, but encompass and extend, the previous paradigm.
>
>
> The new concepts are special cases of the previous paradigm. The Way
> is yet a precise art.
>
>> I offer this as a necessarily abbreviated and simplified history of
>> the development of rational thinking on the human scale, and also
>> perhaps the development of individual thinking among members of this
>> list growing up within that knowledge environment. Although
>> overstated, perhaps "ontogeny recapitulates phylogeny" applies here
>> as well.
>
>
> What has this to do with AI morality?
>
The key summary statement is near the end, enclosed in asterisks. It
points to a more practical approach to making progress on human
morality.

More explicitly, and reinforcing some of my previous messages on this
topic, I suggest that an approach based on modeling and extrapolation
followed by top-down feedback will prove impractical, and that real
progress can be achieved via a more bottom-up approach: better
understanding and facilitation of existing human system dynamics.

I am also suggesting, in the closing segment of my post that you didn't
quote or comment on, that the thinking of some smart, young, idealistic
rational free-thinkers is still in the phase of believing that such a
top-down understanding is both possible and effective, and that as they
gain "context" their worldview will develop to a higher level, one where
interdependence is seen as essential for robust growth.

- Jef


