Re: Robot that thinks like a human

From: Ben Goertzel (ben@goertzel.org)
Date: Thu May 19 2005 - 16:42:05 MDT


> True. We say that it looks like it is possible, you say that it looks
> like it isn't possible, and neither of us has published any formal
> reasoning to support our position. We think you're rationalising, you
> think we're indulging in wishful thinking. For now we can only keep
> working towards proof that resolves the issue one way or the other.

Michael, I don't disagree with this. What I object to is the implication
that SIAI is the only group that takes these issues seriously.

I agree that SIAI is taking a different approach from Novamente or anyone
else to exploring Friendly-AI issues; and that no one, including SIAI, has
made any really convincing arguments why their approach to Friendly-AI
issues is superior.

> Reliable world domination is of the same
> structural difficulty as Friendliness; it's perhaps a little easier to
> specify what you want, but no easier to get an AGI to do it.

I'm not at all sure this is correct.

For one thing, it is clear to me that world domination using a human-level
but highly restricted AGI (together with other advanced technologies) is
possible ... whereas it's not yet clear to me that Friendly AI is even
possible in any strong sense.

Eliezer's writings contain a lot of nice arguments as to why FAI is either
impossible or really, really hard. I have not seen comparable arguments as
to the difficulty or potential impossibility of technology-powered world
domination.

I'm afraid that here again you are indulging in wishful thinking ;-)

Orwell may have been overoptimistic in his time estimate, but his basic
point about what technology can do if we let it still seems to me right on.

It may happen (I hope not!) that investigation of FAI reveals that it's so
hard that the only way to avoid non-Friendly superhuman AI is to enforce a
global dictatorship that forbids AGI research. Then an aesthetic/moral
decision needs to be made as to whether it's better to enforce such a
dictatorship or just let the non-Friendly uber-AI romp and toss the cosmic
coin...

> Even the
> people who think that AGIs will automatically have self-centered,
> human-like goal systems should agree with this. Anyone foolish enough
> to try to take over the world using AGI, and who manages to beat the very
> harsh negative prior for AGI project success, will still almost certainly
> fail (we'd say by destroying the world; people with anthropomorphic views
> of AI would say because the AGI revolts and rules the world itself or
> discovers objective morality and becomes nice, but still failure).

I don't agree, but I don't see what I would gain by posting to this list a
detailed plan for how to use human-level but highly restricted AGI to
achieve world domination.

I think I'll keep my thoughts on this subject to myself, heh heh heh ;-D

>> IMO a more productive direction is to think about how to design an AGI
>> that will teach us a lot about AGI and Friendly AGI, but won't have
>> much potential of hard takeoff.
>
> You don't need to build a whole AGI for that. Any algorithms or dynamics
> of interest can be investigated by a limited prototype. The results of
> these experiments can be fed back into your overall model of how the
> design will perform. AGI is hard to modularise, but if your design
> requires a random-access interaction pattern over every single functional
> component before it displays recognisable behaviour then you are on a
> wild goose chase.

This is where we disagree. There is a lot of middle ground between
experiments with individual system components (which we've already been
doing for a while) and "random-access interaction pattern over every single
functional component" ....

The relevant questions pertain to what the dynamics of an AGI system will be
when put into various complex situations involving interaction with other
minds. If the AGI system at hand is a complex, self-organizing system, then
these questions become extremely hard to resolve via pure mathematics. So
we then need either

a) a design for an AGI that is not a complex, self-organizing system, but is
more predictable and mathematically tractable to model
b) a really powerful new mathematics of (intelligent) complex,
self-organizing systems

I don't think a is possible.

I think b might be possible but we're nowhere near having it now, and based
on my knowledge of the background of the folks involved with SIAI, I don't
think you guys have much chance of coming up with such a thing in the
reasonably near future.

It may be that the best way to achieve b is to create a "specialized AGI"
whose only domains are mathematics and scientific data analysis. The
question then becomes whether it's possible to create an AGI with enough
power to help us rapidly achieve b, yet without giving this AGI enough
autonomy and self-modifying capability to achieve a surprise hard takeoff.
I believe that this is possible to do, for instance using Novamente. I
don't think it's an easy problem but I think it's a vastly easier problem
than creating an AGI that remains Friendly through a takeoff.

Thus my vision of a way forward is to create

1) toddler AGIs interacting with each other in simulated environments,
which will pose no significant hard-takeoff danger but will let us begin
learning empirically about AGI morality in practice

2) narrowly focused scientist/mathematician AGIs that can help us create the
now-missing math of evolving intelligent systems

Note that these two goals intersect, if you buy the Lakoff-Nunez argument
that human math is based on metaphors of human physical perception and
action. According to this view, the toddler AIs are probably a necessary
intermediate stage before you get to the robust scientist/mathematician
AIs.

Once we have 1 and 2 we will have much better knowledge and tools for making
the big decision, i.e. whether to launch a hard takeoff or impose a global
thought-police AGI-suppression apparatus.... Or conceivably another
alternative will become apparent...

>> I think this is much more promising than trying to make a powerful
>> theory of Friendly AI based on a purely theoretical rather than
>> empirical approach.
>
> Well, let's face it, experimenting is more fun, less frustrating and
> potentially money-spinning.

In fact I prefer theorizing, on a personal level. But I guess it's a matter
of taste.

And money doesn't interest me very much.

If I weren't convinced building an AGI was highly important, I'd go back to
being a professor, which was a quite comfortable lifestyle, and spend the
rest of my life happily philosophizing, or working on bioinformatics with a
view toward life extension ;-)

>> The Novamente project seeks to build a benevolent, superhuman AGI
>
> Ben, you started off trying to build an AGI with the assumption that it
> would automatically be Friendly,

Certainly not; I read too much SF in my youth to ever have believed that ;-)

> or that at most it would take a good
> 'upbringing' to make it Friendly.

A good upbringing in the context of the right design (which I think I have).

I still believe this may be true, but I'm far from certain of it. I've been
convinced of the wisdom of trying to mathematically validate this intuition
;)

My views have shifted over the years, but nowhere near as drastically as
Eli's....

> This required a different design
> approach, which we initially adopted with trepidation and resignation
> because formal methods had a pretty bad track record in GOFAI. As it
> turned out the problem wasn't formal methods, the problem was GOFAI
> foolishness giving them a bad name, and that design approach was
> actually far preferable even without the Friendliness constraint.

I basically agree with this, which is why one of the key components of
Novamente is Probabilistic Term Logic (a probabilistic formal approach to
learning, memory, reasoning, etc...).
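
Just to give the flavor of what I mean by a probabilistic formal approach,
here is a deliberately trivial Python sketch of an independence-based
deduction over term-logic-style inheritance probabilities. This is purely
my own toy illustration of the general spirit, not the actual PTL rule set
and not Novamente code.

# Toy sketch only: simple independence-based probabilistic deduction,
# in the general spirit of probabilistic term logic. Not Novamente code.

def deduce(s_ab, s_bc, s_b, s_c):
    """Estimate P(C|A) from P(B|A), P(C|B), P(B) and P(C),
    assuming independence where no other information is available."""
    if s_b >= 1.0:
        return s_c  # degenerate case: B covers everything
    raw = s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)
    return max(0.0, min(1.0, raw))  # clamp to a legal probability

# e.g. Inheritance(cat, mammal)=0.98, Inheritance(mammal, furry)=0.80,
# P(mammal)=0.1, P(furry)=0.15  =>  estimated Inheritance(cat, furry)
print(deduce(0.98, 0.80, 0.1, 0.15))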

In fact I recall strenuously trying to convince Eliezer of this point back
in 2001 or so.... It may be that my arguments had some impact, though I'm
sure he primarily came to the conclusion from his own direction.

>> a) we think such a theory will come only from experimenting with
>> appropriately constructed AGI's
>
> I don't think you can actually get such a theory from experimenting
> with AGIs unless you know exactly what you're looking for.

I have a pretty good idea what I'm looking for, in fact. I'm looking for
dynamical laws governing the probabilistic grammars that emerge from
discretizing the state space of an interactive learning system. I have some
hypotheses regarding what these dynamical laws should look like. But I
don't know how to prove these hypotheses using pure math....
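
To make that slightly more concrete with a deliberately crude sketch (my
own toy example, nothing like the real system): discretize a trajectory of
the system's state into symbols and estimate the transition probabilities
among them. First-order transition statistics are the most trivial possible
stand-in for the richer probabilistic grammars I actually have in mind.

# Toy sketch: discretize a 1-D state trajectory into symbols and estimate
# first-order symbol-transition probabilities. A real analysis would use
# richer grammars over a high-dimensional state space.

import math, random
from collections import Counter, defaultdict

def discretize(trajectory, n_bins, lo, hi):
    width = (hi - lo) / n_bins
    return [min(max(int((x - lo) / width), 0), n_bins - 1) for x in trajectory]

def transition_probs(symbols):
    counts = defaultdict(Counter)
    for a, b in zip(symbols, symbols[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

# e.g. a noisy oscillation, discretized into 4 symbolic states
traj = [math.sin(t / 5.0) + random.gauss(0, 0.05) for t in range(500)]
print(transition_probs(discretize(traj, 4, -1.2, 1.2)))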

> Inventing
> a theory to explain the behaviour shown in some set of simple
> experiments will probably be simultaneously easier yet result in a
> theory with a lot of cruft compared to a proper theory of the
> dynamics of causally clean goal systems. If your AGI doesn't have
> a causally clean goal system then it's pretty much a write off in
> terms of our ability to predict the results of self-modification.

My AGI design has a causally clean goal system -- but I don't believe
it's computationally feasible to have every last computation done in the AGI
system follow directly from the goal system. So it becomes a matter of
having the goal system regulate some more efficient but less predictable
self-organizing learning dynamics. Which makes the overall behavior less
easily predictable...
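
Schematically, the regulation idea looks something like the following toy
sketch (again, purely illustrative, not the actual Novamente goal system):
the ubergoal is consulted at periodic checkpoints rather than deriving
every computation step.

# Toy sketch of goal-regulated self-organizing dynamics: the ubergoal is
# only consulted at periodic checkpoints, not at every computation step.
# Purely illustrative; not the Novamente goal system.

import random

def ubergoal_score(state):
    return -abs(state - 42.0)  # stand-in utility for the top-level goal

def self_organizing_step(state):
    return state + random.gauss(0.5, 3.0)  # fast, noisy local dynamics

def run(steps=2000, audit_every=50):
    state = checkpoint = 0.0
    for t in range(1, steps + 1):
        state = self_organizing_step(state)
        if t % audit_every == 0:
            # The goal system regulates only here, rolling back drift that
            # scores worse than the last accepted checkpoint.
            if ubergoal_score(state) < ubergoal_score(checkpoint):
                state = checkpoint
            else:
                checkpoint = state
    return state

print(run())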

The problem is that requiring every last thought the system has to follow
directly from the system's ubergoal leads one to totally computationally
infeasible designs like ITSSIM. The efficiency workarounds seem to
inevitably increase unpredictability.

I have seen nothing remotely resembling a solution to this fundamental
problem in any of the SIAI literature (or anywhere else).

> Virtually no-one wants to destroy the world on purpose,

Unfortunately I know a few counterexamples to that assertion ;-)

yours
ben


