RE: Friendly AI in "Positive Transcension"

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Feb 15 2004 - 13:20:47 MST


Hi,

> > Before responding at all I'm going to make a request of you. Please
> > summarize,
> >
> > FIRST, in a single, clear, not-too-long sentence
> >
> > SECOND, in a single, clear, not-too-long paragraph
> >
> > your own view of your theory of Friendly AI. I will then paraphrase
> > your own summaries, in my own words, in my revised essay. I have no
> > desire to misrepresent your ideas, of course.
>
> This cannot possibly be done. What you're asking is undoable.

OK, well if you can't even summarize your own work compactly, then how the
heck do you expect me to be able to do so??? ;-)

> Now, please first, in a single, clear, not-too-long sentence, and second,
> in a single, clear, not-too-long paragraph, summarize, to me, your view
> of all the fundamental concepts that went into Novamente - not just for me,
> mind you, but for readers completely unfamiliar with your ideas who will
> read my interpretation of your ideas.

At the end of this email I will paste the intro from an overview paper on
Novamente that appeared in the proceedings of the IJCAI '03 conference.
Yeah, it doesn't really get at the essence of the theory underlying the
system, but it summarizes the work in a not-too-misleading way.

Pardon the LaTeX ;-)

> I'll be sure to add a similar disclaimer to my forthcoming essay about
> "Novamente: Cyc with activation levels added".

My essay was not principally about your ideas; it was about my own ideas,
and mentioned yours along with those of several other folks.

Regarding Novamente and Cyc, Cyc lacks

-- grounding of concepts in experiential, nonlinguistic data
-- evolutionary learning as a means for concept creation
-- attention allocation dynamics aimed at forming a "moving bubble of
awareness"
-- introspection aimed at having the system analyze its own thought
processes and create new concepts accordingly
-- probabilistic inference spanning multiple subdomains of knowledge
-- learning of procedures (for carrying out external-world actions, or
cognitive actions)
-- the ability, even in principle, to modify its own cognitive processes or
data structures

and a large number of other features of Novamente. The differentiation is
not hard to understand.

Cyc consists of a knowledge base of abstract knowledge, and a collection of
reasoning engines running on top of it -- mostly crisp ones, but some that
do probabilistic inference in narrow subdomains. Novamente explicitly
contains much more than that.

> > So far as I can tell, the biggest difference you see between my rendition
> > of your views, and your actual views, is that instead of "programming or
> > otherwise inculcating benevolence to humans", you'd rather speak about
> > "programming or otherwise inculcating humane morality in an AI".
>
> You're missing upward of a dozen fundamental concepts here.

Look, any brief summary is going to miss a lot of fundamental concepts.
That is the nature of summary. In summarizing something, one has to choose
what to include and what to leave out.

> First, let's delete "programming or otherwise inculcating" and replace it
> with "choosing", which is the correct formulation under the basic theory
> of FAI, which makes extensive use of the expected utility principle.
> Choice subsumes choice over programming, choice over environmental
> information, and any other design options of which we might prefer one to
> another.

Fine, we can refer to "choosing", while noting that programming and teaching
are apparently the most likely forms of choosing in this context...
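
Just so we're talking about the same thing, here is how I'm reading the
expected-utility framing -- the symbols below are my own illustration, not
your formulation: every design decision (code, training regime, environment)
is one option $d$ in a set $D$, and the "Friendly" choice is, roughly,

\[
d^{*} \;=\; \arg\max_{d \in D} \; \mathbb{E}\left[\, U \mid d \,\right]
      \;=\; \arg\max_{d \in D} \; \sum_{o} P(o \mid d)\, U(o) ,
\]

with $o$ ranging over outcomes and $U$ whatever utility function is doing
the judging. If that is roughly the sense in which "choice" subsumes
programming, teaching, and every other design option, then fine.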

> Next, more importantly, "humane" is not being given its intuitive sense
> here! Humane is here a highly technical concept, "renormalized
> humanity".

So far as I can tell this is a fuzzy, ill-defined, and obscure concept,
lacking a clear and compact definition.

Feel free to give one, or refer me to a specific paragraph in one of your
online writings where such is given.

> It's not that you're misunderstanding *what specifically* I'm saying, but
> that you're misunderstanding the *sort of thing* I'm attempting to
> describe. Not apples versus oranges, more like apples versus the
> equation x'' = -kx.

OK, so please clearly explain what sort of thing you're attempting to
describe.

> Your most serious obstacle here is your inability to see anything except
> the specific content of an ethical system - you see "Joyous Growth" as a
> specific ethical system, you see "benevolence" as specific content, your
> mental model of "humaneness" is something-or-other with specific ethical
> content. "Humaneness" as I'm describing it *produces* specific ethical
> content but *is not composed of* specific ethical content. Imagine the
> warm fuzzy feeling that you get when considering "Joyous Growth". Now,
> throughout history and across the globe, do you think that only
> 21st-century Americans get warm fuzzy feelings when considering their
> personal moral philosophies?

Actually, abstractions like that don't give me "warm fuzzy feelings"... but
maybe that's a quirk of my personal psychology. I get warm fuzzy feelings
toward humans and animals, for instance, but not toward abstract principles.

And, I *don't* see abstract ethical principles as being specific ethical
systems; I tried to draw that distinction very clearly in my essay, by
defining abstract ethical principles as tools for judging specific ethical
systems, and defining ethical systems as factories for producing ethical
rules.

I can understand if you're positing some kind of "humaneness" as an abstract
ethical principle for producing specific human ethical systems. It still
seems to me that it's a messy, overcomplex, needlessly ill-defined ethical
principle which is unlikely to be implantable in an AI or to survive a
Transcension.

> The dynamics of the thinking you do when you consider that question would
> form part of the "renormalization" step, step 4, the volition examining
> itself under reflection. It is improper to speak of a vast morass of
> "humane morality" which needs to be renormalized, because the word
> "humane" was not introduced until after step 4. You could speak
> of a vast
> contradictory morass of the summated outputs of human moralities, but if
> you add the "e" on the end, then in FAI theory it has the connotation of
> something already renormalized. Furthermore, it is improper to speak of
> renormalizing the vast contradictory morass as such, because it's a
> superposition of outputs, not a dynamic process capable of renormalizing
> itself. You can speak of renormalizing a given individual, or
> renormalizing a model based on a typical individual.
>
> This is all already taken into account in FAI theory. At length.

Well, I'm not sure I believe there is a clear, consistent, meaningful,
usable entity corresponding to your two-word phrase "humane morality." I'm
not so sure this beast exists. Maybe all there is, in the human-related
moral sphere, is a complex mess of interrelated, largely self-contradictory
ethical systems, guided by some general principles of complex systems
dynamics and by our biological habits and heritage.

-- Ben G

************
\begin{abstract}
The {\em Novamente AI Engine}, a novel AI software system, is briefly
reviewed. Unlike the majority of contemporary AI projects, Novamente
is aimed at artificial {\em general} intelligence, rather than being
restricted by design to one particular application domain, or to a
narrow range of cognitive functions. Novamente integrates aspects of
many prior AI projects and paradigms, including symbolic,
neural-network, evolutionary programming and reinforcement learning
approaches; but its overall architecture is unique, drawing on
system-theoretic ideas regarding complex mental dynamics and
associated emergent patterns.
\end{abstract}

\section{Introduction}

We describe here an in-development AI software system that confronts
the ``grand problem of artificial intelligence'': Artificial General
Intelligence (AGI). This software system is the {\em Novamente AI
Engine}, or more compactly {\em Novamente}.

The Novamente design incorporates aspects of many previous AI
paradigms such as genetic programming, neural networks, agent systems,
evolutionary programming, reinforcement learning, and probabilistic
reasoning. However, it is unique in its overall architecture, which
confronts the problem of creating a holistic digital mind in a direct
way that has not been done before.

The fundamental principles underlying the system design derive from a
novel complex-systems-based theory of mind called the ``psynet
model'', which was developed in a series of cross-disciplinary
research treatises published during 1993-2001
\cite{goe93,goe93b,goe94,goe97,goe02}.
What the psynet model has led us to is not
a conventional AI program, nor a conventional multi-agent-system
framework. Rather, Novamente aims to be an autonomous,
self-organizing, self-evolving AGI system, with its own understanding
of the world, and the ability to relate to humans on a
``mind-to-mind'' rather than a ``software-program-to-mind''
level. The Novamente project is based on many of the same ideas that
underlay the Webmind AI Engine project carried out at Webmind
Inc. during 1997-2001 \cite{goertzel00}; and it also draws to some extent
on ideas from Pei Wang's Non-Axiomatic Reasoning System (NARS)
\cite{wang95phd}.

At the moment, Novamente is partially implemented as a C++ software
system, customized for Linux clusters, with a few externally-facing
components written in Java.
The overall mathematical and conceptual design of the system is
described in a forthcoming paper \cite{NovPaper} and book
\cite{NovBook}. While the implementation is not yet complete, the
design has matured over the years, and draws upon the many
lessons learned by the authors in the design, implementation and
testing of the Webmind AI Engine. The current, partially complete
codebase is being used by the startup firm Biomind LLC to analyze
genetics and
proteomics data in the context of information integrated from numerous
biological databases. Once the system is fully engineered, the
project will begin a phase of interactively teaching the Novamente
system how to respond to user queries, and how to usefully analyze and
organize data. The end result of this teaching process will be an
autonomous AGI system, oriented toward assisting humans in
collectively solving pragmatic problems.
*******************


