From: Ben Goertzel (ben@goertzel.org)
Date: Thu May 27 2004 - 06:26:30 MDT
Eliezer,
I think I'll cut short the less interesting and more bickery parts of
this thread and focus on the parts with more interesting content ;-)
One comment first, pertinent to your statement that I never really seem
to grok your theory of FAI:
Have you considered making a formulation of your theory of Friendly AI
in a semi-axiomatic form? Not a full mathematical formalism, but a
list of, say, 10-30 brief verbal propositions, with a clear distinction
between which ones are assumptions and which ones are supposed to follow
as conclusions from which other ones?
This would make it much easier for me or others to pinpoint which points
we disagree with, or don't understand.
> > Your "external reference semantics", as I recall, is basically the
> > idea
> > that an AGI system considers its own supergoals to be uncertain
> > approximations of some unknown ideal supergoals, and tries
> to improve
> > its own supergoals. It's kind of a supergoal that says
> "Make all my
> > supergoals, including this one, do what they're supposed to
> do better."
>
> No. That was an earlier system that predated CFAI by years -
> 1999-era or
> thereabouts. CFAI obsoleted that business completely.
OK, it may be that I have confused your earlier views with your more
recent views.
Anyway, I just looked it up, and according to CFAI from 2001,
"
External reference semantics: The behaviors and mindset associated with
the idea that supergoals can be "wrong" or "incomplete" - that the
current supergoals are not "correct by definition", but are an
approximation to an ideal, or an incomplete interim version of a growth
process. Under a system with external reference semantics, supergoal
content takes the form of probabilistic hypotheses about an external
referent. In other words, under ERS, supergoal content takes the form
of hypotheses about Friendliness rather than a definition of
Friendliness.
"
Does this quote still represent your current conception of "external
reference semantics"?
You say:
> External reference semantics says, "Your supergoal content is an
> approximation to [this function which is expensive to compute] or [the
> contents of this box in external reality] or [this box in external
> reality which contains a description of a function which is expensive
> to compute]."
and this clarifies slightly what the "external referent" is.
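To make sure we're reading that definition the same way, here's a minimal
sketch of the structural point as I understand it. This is purely my own
illustration: the class names, candidate hypotheses, and update rule are
invented for this email, not drawn from CFAI.

# Illustrative sketch only; nothing here is taken from CFAI or a real system.

class FixedSupergoal:
    """Supergoal content held as a fixed definition, 'correct by definition'."""
    def __init__(self, definition):
        self.definition = definition      # a function scoring actions

    def evaluate(self, action):
        return self.definition(action)

class ERSSupergoal:
    """Supergoal content held as probabilistic hypotheses about an external referent."""
    def __init__(self, hypotheses):
        # hypotheses: dict mapping candidate Friendliness functions to prior probabilities
        self.hypotheses = dict(hypotheses)

    def evaluate(self, action):
        # expected Friendliness under the current uncertainty about the referent
        return sum(p * h(action) for h, p in self.hypotheses.items())

    def update(self, likelihood):
        # likelihood(h) ~ P(observed evidence | hypothesis h), where the evidence is
        # information about the external referent (e.g. programmer feedback), so the
        # supergoal content itself can be 'wrong' and get revised.
        posterior = {h: p * likelihood(h) for h, p in self.hypotheses.items()}
        total = sum(posterior.values())
        if total > 0:
            self.hypotheses = {h: p / total for h, p in posterior.items()}

The only point the sketch isolates is that under ERS the supergoal content is
a revisable probability distribution over candidates for the external
referent, rather than a definition that is treated as correct by construction.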
One good thing about this idea, conceptually, is that it has the
potential to make the AI's goal system fundamentally social rather than
solipsistic. If the external referent for Friendliness is "my friends'
conception of Friendliness" then the society of the AI's friends is
brought into the dynamics of self-modification.
But there are some obvious problems that kept me from ever considering
this idea very important. Not that it's wrong, just that it's a fairly
trivial point.
I.e.: What is the external referent? If it's some kind of formal
description of Friendliness, then it inherits the problem that formal
descriptions of nonmathematical concepts seem to have copious loopholes.
If it's something informal like "my friends' understanding of
Friendliness," then there is too much potential for the AI to influence
the external referent, thus making it no longer "external."
As I recall, we discussed this long ago, and I never got a response that
was satisfactory to me.
-- Ben G