RE: FAI means no programmer-sensitive AI morality

From: Ben Goertzel (
Date: Sun Jun 30 2002 - 00:04:36 MDT


> > Modern scientific realism is based on accepting outside reality,
> > observations of physical reality, as the fundamental
> determinant of truth.
> >
> > On the other hand, many other traditions are based on accepting *inner
> > intuitions and experiences* as the fundamental determinants of truth.
> It is quite possible to accept both as determinants of truth,
> perhaps in different domains or aspects of a reality that is
> large enough to include both and more. There is no reason one
> must be only in one camp or the other. Or even look at it as
> two different and utterly incompatible camps.

This is correct, of course.

> These inner intuitions and experiences are by no means just
> individual subjectivism with no commonality. The writings and
> teachings of mystics across many traditions, cultures and times
> show a great deal of similarity below the particular cultural
> and religiously imposed coloration of their experiences. Now, it
> is certainly possible this similarity is one of similar brains
> reacting similarly under certain practices and tensions.

Yeah. Aldous Huxley's "The Perennial Philosophy" is the classic enunciation
of this point, though other texts make it more systematically and
scientifically, such as Allan Combs' book "Radiance of Being" from a few
years back.

> I am very familiar with yoga and have been involved in it at
> some level of practice for some time. I have at least a reading
> acquaintance with other traditions. I find yoga practices and
> philosophy excellent training of the mind and emotions and it
> gives me a centering that I cherish. Does that make me less
> able to appreciate and participate in science and scientific
> discussions and activities? No.

I agree; I didn't mean to suggest there was an either/or thing going on here.

I have no big conflict between my Zen side and my scientific side. They
coexist and accept each others' existence...

> I don't see that at all. I see him going out of his way to not
> impose his standards on what the outcome will be or even claim
> that he knows what the ultimate shape of it will be. He
> actually says that it is impossible to impose his standards on
> the outcome even if he wanted to if this thing is a real AGI.
> That looks right to me.

I agree, it is impossible for any of us to impose our standards on an AGI
over the long term.

It's hard enough for us to impose our standards on our own human children!
And our children remain at a human intelligence level, share our same
species, etc.

However, the initial standards with which we supply an AGI may impact the
long-term trajectory of its development.

> I certainly don't believe it is fundamentally wrong. Science is
> perfectly valid and wonderful within its built-in applicable
> domain and limits. I simply don't believe its boundaries are
> the end of the real. Starting an SAI within those boundaries in
> no way keeps it from going beyond them if it finds it needful to
> do so.

I agree completely...

-- Ben G

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT