RE: Friendliness, Vagueness, self-modifying AGI, etc

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Apr 07 2004 - 06:58:06 MDT


> Ben Goertzel wrote:
> > I think that documents on the level of CFAI and LOGI are
> important and
> > necessary, but that they need to come along with more
> formal, detailed
> > and precise documents.
>
> Neither document is a constructive account, as I understand
> it, for three reasons: risk, time and expertise. The risk
> factor is the most important; handing out detailed blueprints
> for building a seed AI is dangerous even if they're seriously
> flawed.

Yes, you have a good point, as a general principle.

However, I do *not* believe Eliezer is hiding a correct design for an
AGI due to security concerns.

What you are describing is one of the several reasons that Novamente is
not open-source. Right now the code is only moderately closely held
(i.e. it's proprietary but we don't take outlandish security
precautions), but we will get more paranoid about this as more of the
design is implemented and the kinks are worked out.

> Remember that
> this is exactly what Eliezer wanted to do in 2001; before
> that he had the breathtakingly irresponsible plan to
> implement an AGI with /no/ Friendliness at all and just hope
> that objective morality existed and that the AGI found it
> before doing anything terminal.

I'm not sure his plan was THAT extreme -- was it?

Perhaps it was more like my original plan, which was to rely
exclusively on TEACHING to imbue my AI system with positive ethics. My
work has evolved since then, and there are now specific ideas about
what kind of architecture is likely to make the teaching of positive
ethics more successful (the mind-simulator architecture).
 
> The other issues are that Eliezer doesn't have a lot of time

Hey, that is patently not the case. Eliezer has more time than any of
us, as -- last I checked -- he has no family to take care of, and no
job, and no other interests nearly rivaling the Singularity in
intensity. If he has no time for concrete AI design it's because he has
prioritized other types of Singularity-oriented work.

> and has relatively little actual coding or architecture
> experience.

This one is a good point.

> I'm sure you know Ben that effective
> AGI architecture requires absolute focus and tends to absorb
> your every waking hour; the constant interference of
> financial and personnel concerns with my ability to
> technically direct was one of the factors that killed my
> first startup at the end of last year.

I wish I had absolute focus on AGI these days. I have about 50% focus
on it, but fortunately the other 50% of my working time goes to
Novamente-based commercial software projects, so there's not so much
cognitive dissonance.
 
> There are quite a few things in LOGI and a lot of things in
> CFAI I didn't get as far as implementing before sanity
> intervened, so I may just have left out the hard bits. The
> only thing I had a huge amount of trouble implementing and
> failed to get to work in any meaningful fashion is CFAI's
> shaper networks. Admittedly I was trying to generate
> pathetically simple moralities grounded in microworlds, but
> still the concept looks unworkable as written to me.

I believe the "shaper networks" idea is sensible and meaningful, BUT, I
believe that in order to get shaper networks to work, a whole bunch of
intermediary dynamics are needed, which are not touched in Eliezer's
documents. A lot of mind dynamics is left out, and a lot of issues in
knowledge representation.

CFAI defines:

"Shaper: A shaper is a philosophical affector, a source of supergoal
content or a modifier for other shapers; a belief in the AI's
philosophy; a node in the causal network that produces supergoal
content."

Essentially, a shaper network requires a workable, learnable,
reason-able representation of abstract content, which allows abstract
bits of uncertain knowledge to interact with each other, to modify each
other, to spawn actions, etc.
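
To give a flavor of what I mean, here is a toy sketch -- purely
illustrative, not Eliezer's design and not Novamente code, with all
names and numbers invented -- of shapers as nodes in a causal network,
each carrying an uncertain strength, modifying one another, and
jointly shaping supergoal content:

# Toy "shaper network": shapers carry uncertain strengths, influence
# each other, and feed into supergoal content.  Hypothetical names only.
from dataclasses import dataclass, field

@dataclass
class Shaper:
    name: str
    strength: float                                  # uncertain endorsement, 0..1
    influences: dict = field(default_factory=dict)   # target name -> causal weight

def propagate(shapers, steps=3):
    """Let shapers nudge each other's strengths for a few rounds."""
    for _ in range(steps):
        updates = {}
        for s in shapers.values():
            for target, weight in s.influences.items():
                updates[target] = updates.get(target, 0.0) + weight * s.strength
        for name, delta in updates.items():
            t = shapers[name]
            t.strength = min(1.0, max(0.0, t.strength + 0.1 * delta))
    return shapers

# A pathetically simple morality grounded in a microworld:
shapers = {
    "honesty":     Shaper("honesty", 0.7, {"help_humans": 1.0}),
    "empathy":     Shaper("empathy", 0.8, {"help_humans": 1.0, "honesty": 0.5}),
    "help_humans": Shaper("help_humans", 0.5, {}),   # supergoal content
}
propagate(shapers)
print({n: round(s.strength, 2) for n, s in shapers.items()})

Even a toy like this makes the hard part obvious: the real question is
what the node contents are and how they get learned and revised, not
how activation flows around the graph.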

So far as I can tell there is nothing in Eli's AI framework that
suggests a knowledge representation capable of being coupled with
sufficiently powerful learning and reasoning algorithms to be used in
this way.

I think this CAN be done with a neural net architecture of some sort,
and in my paper on "Hebbian Logic" I gave a sketchy idea of how.
Novamente takes a different approach: it uses a "probabilistic
combinatory term logic" knowledge representation, together with a
special kind of probabilistic inference (with some relation to Hebbian
learning) synthesized with evolutionary learning for
learning/reasoning.
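
To give a rough idea of the flavor of probabilistic term logic, here is
a toy sketch of uncertain "inheritance" links carrying (strength,
confidence) truth values, with a simple independence-based deduction
rule. This is NOT the actual Novamente inference machinery, and all
names and numbers are invented for illustration:

# Toy probabilistic term-logic deduction over uncertain inheritance
# links (strength, confidence).  Illustrative only, not Novamente code.

def deduction(sAB, sBC, sB, sC):
    """Infer strength of A->C from A->B and B->C, assuming independence."""
    if sB >= 1.0:
        return sBC
    return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

def combine_confidence(cAB, cBC):
    """Crude rule: the conclusion is no more confident than its weakest
    premise, discounted a bit for the inference step."""
    return 0.9 * min(cAB, cBC)

# Example: "cat -> mammal" and "mammal -> animal", plus term probabilities
sAB, cAB = 0.95, 0.9   # cat -> mammal
sBC, cBC = 0.98, 0.9   # mammal -> animal
sB, sC = 0.1, 0.2      # P(mammal), P(animal) in the knowledge base

sAC = deduction(sAB, sBC, sB, sC)
cAC = combine_confidence(cAB, cBC)
print(f"cat -> animal: strength={sAC:.3f}, confidence={cAC:.2f}")

The point of carrying explicit uncertainty on every link is that
abstract knowledge can then be learned, revised and reasoned about
incrementally, rather than asserted in the brittle all-or-nothing way
predicate logic encourages.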

My point is that Eli's architecture gives a grand overall picture,
but doesn't actually give a workable and *learnable and modifiable* way
to represent complex knowledge. Of course it's easy to represent
complex knowledge -- predicate logic does that, but it does so in a very
brittle, non-learnable way. And it's easy to get learning to work on
SIMPLE knowledge; standard neural net architectures do that. But
representing complex knowledge in a way that flexible, adaptive learning
can work on -- that is hard, and that is the crux of making AGI; it is
required for making "shaper networks" work, and it is one thing that
Eli's writings never come close to addressing IMO.

> > B) by the development of a very strong mathematical theory of
> > self-modifying AGI's and their behavior in uncertain environments
> >
> > My opinion is that we will NOT be able to do B within the next few
> > decades, because the math is just too hard.
>
> I'm not qualified to give an opinion on this; I haven't spent
> years staring at it. I suspect that a lot of progress could
> be made if lots of genius researchers were working on it, but
> you and Eliezer seem to be it.

That could be; but a lot of genius researchers are working on related
but apparently EASIER questions in complex systems dynamics, without
making very rapid progress...
 
> ie an existential risk perspective. Developing
> positive-safety takeoff protection is just difficult, not
> near impossible, and is our duty as AGI researchers. I am not
> too worried about Novamente at the moment, but you may well
> hire a bright spark or two who revises the architecture in
> the direction of AI-completeness

;-)

We may revise the architecture in the future, but probably not before
we complete implementing the current one and see how easy or hard it
is to teach it things in an appropriate simulated world.

Anyway, frankly, you do not know nearly enough about the architecture
to tell whether it needs revision or not.

If you're interested in becoming involved in Novamente, you can email
me off-list, of course.

> (I would've volunteered, if
> you'd asked a few months back). I think everyone affiliated
> with the SIAI would be a lot happier if you adopted a
> draconian, triple-redundant and preemptive takeoff prevention policy.

We will do that when the time comes ... Novamente is not at a stage
where this is relevant right now, but it will be in a matter of 1-5
years depending on a number of factors.
 
> I'm
> thus working on an 'expert system' (actually cut-down LOGI,
> but don't tell anyone :) for network security (penetration
> testing and active defence) applications. The field is red
> hot in VC circles right now and things are looking fairly
> promising.

Interesting! I'll email you off-list some stuff about network security
that I wrote about a year ago, when I was thinking about getting into
that area.

-- Ben G


