Re: Robot that thinks like a human

From: Ben Goertzel (ben@goertzel.org)
Date: Thu May 19 2005 - 19:28:52 MDT


> You claim that you've made finding a workable FAI design the central goal
> of your project, but have you sat down and designed a set of experiments
> that will generate the information you need?

Largely, yes.

> Have you worked out what the
> blocker set of questions you need answers to is?

Largely...

>If your objective is to
> (safely) generate data for FAI design, then you should be planning in
> depth how you're going to go about getting it. Reality will probably
> deviate from the plan, but without the plan you're just groping about
> aimlessly. The only good reason I can think of not to put a core team
> member on this full time is that you don't have enough funding yet.

In fact, we are currently at a very difficult point in terms of funding, and
only two people (neither of them me) are working full-time on Novamente AGI.

> I've had someone seriously propose to me that we use limited AI to
> rapidly develop nanotech, which would then be used to take over the world
> and shut down all other AI/nanotech/biotech projects to prevent anything
> bad from happening (things got rather hazy after that).

Hmmm... I think I can guess who ;-)

>I don't worry
> about it because 'highly restricted human-level AGI' is very, very hard
> and ultimately pointless (if you know how to make a 'human-level AGI'
> controllable, then you know how to make a transhuman AGI controllable).
> People less convinced about hard takeoff will doubtless be more concerned
> about this sort of thing.

I am more concerned about this kind of thing than you are. I think
"human-level" is a very vague term, and that someone could make an AI that
was superhuman in enough ways to successfully help them achieve definitive
world domination, yet not self-reflective enough to achieve hard takeoff.

>> Orwell may have been overoptimistic in his time estimate, but his
>> basic point about what technology can do if we let it, still seems
>> to me right-on
>
> You don't need AGI to take over the world, particularly if you already
> have the resources of a nation state or multinational.

Taking over the world quickly enough and thoroughly enough to prevent the
development of AGIs would be difficult given current technologies, though.

> 'Self-organising' is a bit of a vague term. It can mean 'self-modifying,
> but with less seed complexity than a fully specified seed AI'

The distinction between self-modification and learning is not well-defined.

>or 'AGI
> in which learning operations are probabilistic and/or stochastic' or
> 'AGI in which self-modification is pervasive, local and without central
> control'. As far as I can tell Novamente falls in the first and third
> categories and may or may not be in the second depending on how much
> you've moved on from the 'bubbling broth' approach.

Novamente never had a "bubbling broth" approach. Webmind sorta did.
Novamente relies centrally on Probabilistic Term Logic.
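
To give a rough flavor of what I mean by probabilistic term logic, here is a
toy sketch of an independence-based deduction rule, just standard
conditional-probability algebra; it is meant to convey the style of inference,
not the actual rule set or code inside Novamente:

# Toy sketch of term-logic-style deduction over probabilistic truth values.
# Given strengths for the inheritance relations A->B and B->C, plus term
# probabilities P(B) and P(C), estimate the strength of A->C under
# independence assumptions.  Illustrative only, not Novamente's actual code.

def deduce(sAB, sBC, sB, sC):
    if sB >= 1.0:
        return sBC  # degenerate case: B always holds
    return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

# e.g. "cats are mammals" (0.99), "mammals are furry" (0.9),
# with base rates P(mammal) = 0.1, P(furry) = 0.15
print(deduce(0.99, 0.9, 0.1, 0.15))  # ~0.89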

In Novamente, significant self-modification is done under central control.
But as I said, it's not really possible to distinguish minor
self-modification from learning --- and in this sense, there is plenty of
self-modification occurring in Novamente all the time without explicit
central control.

>Thus the solution combines both (a) and (b), though
> I wouldn't say that what is needed is new mathematics so much as a novel
> means of describing AGI functionality that allows us to tractably apply
> the formal methods we've already got.

Well, it's an interesting idea and I'll be eager to see some details
someday...

> This is where we part ways again. I think that the best approach is to
> develop progressively more advanced AI systems, using each existing system
> as a formal prover to develop the next design.

Egads! How much work have you guys ever done with real formal theorem
provers?

I worked a bit with HOL and Otter, and boy do they suck, though they're
great by comparison to the competition....

Unfortunately, it seems that making theorem-provers that really work
probably requires a fairly high level of AGI....

I agree that IF you can get around this problem and make a nonsentient,
highly specialized but highly powerful theorem-proving AI, THEN this is the
best route to FAI.

I just don't believe it is possible....
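
For anyone who hasn't played with these tools: what a prover like Otter
automates is essentially mechanical search over inference steps. Here is a toy
sketch of the forward-chaining part over ground Horn clauses (my own
illustrative code, not Otter's internals and not anything in Novamente):

# Toy forward chaining over ground Horn clauses, to show the kind of
# mechanical inference classical provers automate.  The hard, unsolved part
# is guiding the search on nontrivial mathematics, which is exactly where
# I think AGI-level smarts become necessary.

def forward_chain(facts, rules, goal):
    # rules: list of (set_of_premises, conclusion) pairs
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return goal in known

facts = {"man(socrates)"}
rules = [({"man(socrates)"}, "mortal(socrates)")]
print(forward_chain(facts, rules, "mortal(socrates)"))  # True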

>> 1) toddler AGI's interacting with each other in simulated environments,
>> which will pose no significant hard-takeoff danger but will let us begin
>> learning empirically about AGI morality in practice
>
> This makes no sense if your AGI does in fact possess a causally clean
> goal system. 'AGI morality' should be something you inscribe on your
> code (or startup KB);

Yah, but since in reality not all system operations can be chosen directly
based on the goals, some teaching is still needed, to make sure the system
correctly learns how to combine its explicitly goal-oriented higher-level
control with its lower-level activity, which is goal-regulated but not
dictated in detail by the goals...

>> Note that these two goals intersect, if you buy the Lakoff-Nunez argument
>> that human math is based on metaphors of human physical perception and
>> action.
>
> How does that require interaction between AGI instances? The only bit of
> maths that requires an idea of other intelligences is game theory.

According to Lakoff and Nunez, what understanding human math requires is not
interaction with other intelligences but rather embodiment, or at least the
intuition about simple "prepositional"-type relationships that humans gain
from embodiment...

>> Once we have 1 and 2 we will have much better knowledge and tools for
>> making the big decision, i.e. whether to launch a hard-takeoff or
>> impose a global thought-police AGI-suppression apparatus....
>
> Are you going on the record saying that AGIRI will take it upon themselves
> to try and implement (2) ?

Indeed, at the moment we are already experimenting with simple
theorem-proving using Novamente. It's not our current top priority though.

>> I have a pretty good idea what I'm looking for, in fact. I'm looking for
>> dynamical laws governing the probabilistic grammars that emerge from
>> discretizing the state space of an interactive learning system. I have
>> some hypotheses regarding what these dynamical laws should look like.
>
> I could make an educated guess at what you mean by this, but experience
> has taught me to avoid guessing about other people's AGI ideas if
> possible. Would you care to expand?

I will, but later... I've spent enough time emailing this morning ;-)
 [morning because I'm in Australia at the moment]

>> Which makes the overall behavior less easily predictable...
>
> Technically, yes. Practically, not necessarily, because a good design
> should be able to enforce absolute constraints on the 'freedom of
> action' of local dynamics that prevent them from altering anything
> important.

That is hard, if the local dynamics are organizing the knowledge base that
guides the high-level goal system's interpretation of important terms and
concepts.

-- Ben


