From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Thu May 19 2005 - 19:01:36 MDT
Ben Goertzel wrote:
> Michael, I don't disagree with this. What I object to is the implication
> that SIAI is the only group that takes these issues seriously.
The SIAI is the only group that is taking these issues seriously /enough/.
The SIAI has made Friendliness its primary research objective and has
dedicated 100% of its self-funded research effort to solving that problem.
You claim that you've made finding a workable FAI design the central goal
of your project, but have you sat down and designed a set of experiments
that will generate the information you need? Have you worked out the
blocker set of questions you need answered? If your objective is to
(safely) generate data for FAI design, then you should be planning in
depth how you're going to go about getting it. Reality will probably
deviate from the plan, but without the plan you're just groping about
aimlessly. The only good reason I can think of not to put a core team
member on this full time is that you don't have enough funding yet.
> For one thing, it is clear to me that world domination using a
> human-level but highly restricted AGI (together with other advanced
> technologies) is possible...
I've had someone seriously propose to me that we use limited AI to
rapidly develop nanotech, which would then be used to take over the world
and shut down all other AI/nanotech/biotech projects to prevent anything
bad from happening (things got rather hazy after that). I don't worry
about it because 'highly restricted human-level AGI' is very, very hard
and ultimately pointless (if you know how to make a 'human-level AGI'
controllable, then you know how to make a transhuman AGI controllable).
People less convinced about hard takeoff will doubtless be more concerned
about this sort of thing.
> Orwell may have been overoptimistic in his time estimate, but his
> basic point about what technology can do if we let it, still seems
> to me right-on
You don't need AGI to take over the world, particularly if you already
have the resources of a nation state or multinational. There are several
highly disruptive technologies coming up that could potentially do it,
particularly in combination. I don't worry about this either because
there isn't much I can do about it.
> It may happen (I hope not!) that investigation of FAI reveals that
> it's so hard that the only way to avoid non-Friendly superhuman AI
> is to enforce a global dictatorship that forbids AGI research.
Both the futility and the unlikeliness of this have been previously
discussed at length on SL4.
> I don't agree, but I don't see what I would gain by posting to this
> list a detailed plan for how to use human-level but highly restricted
> AGI to achieve world domination.
So you too amuse yourself in the shower by thinking up fiendishly
intricate plans for world domination? Alas the SIAI people tell me
that this is a bad habit that I will have to kick ;)
> This is where we disagree. There is a lot of middle ground between
> experiments with individual system components (which we've already been
> doing for a while) and "random-access interaction pattern over every
> single functional component" ....
Yes, there is. Technically speaking, the implementation project I'm working
on now falls into that middle ground. However that's a commercial product
and a proof of concept, not an exploratory prototype. I believe that
exploratory prototyping of anything above small components is neither
necessary nor advisable. I'll make a serious attempt to convince you of
that if and when I have a non-esoteric demonstration.
> The relevant questions pertain to what the dynamics of an AGI system
> will be when put into various complex situations involving interaction
> with other minds.
Actually, agent modelling and game theory seem to me to be among the less
complicated parts of AGI; the latter isn't strictly speaking AGI at all,
though it is an AGI competence. This is exactly the sort of
thing that you should be able to fully specify in advance; not the
accuracy of the AGI's external agent model, which is essentially a
performance issue (unless you're silly enough to introduce reasoning
pathologies that prevent the modelling of whole classes of cognitive
architecture), but the important issue of what it will do with the
outputs of those models.
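To make the distinction concrete, here's a toy sketch (Python; my own
illustration, all names invented, nothing to do with any real AGI codebase).
The policy over the opponent model's outputs is a small, auditable rule fixed
in advance; the model's accuracy is a separate, purely quantitative parameter:

```python
import random

# Toy sketch: what the agent *does* with a prediction is fully specified
# up front; only how often the prediction is right varies with capability.

COOPERATE, DEFECT = "C", "D"

def policy(predicted_move: str) -> str:
    """Fully specified response rule (here: simple reciprocation)."""
    return COOPERATE if predicted_move == COOPERATE else DEFECT

def noisy_model(true_move: str, accuracy: float, rng: random.Random) -> str:
    """Opponent model: returns the true move with probability `accuracy`."""
    if rng.random() < accuracy:
        return true_move
    return DEFECT if true_move == COOPERATE else COOPERATE

rng = random.Random(1)
# Improving the model changes only the accuracy parameter; the decision
# rule applied to its outputs never changes and can be audited in advance.
moves = [policy(noisy_model(COOPERATE, 0.9, rng)) for _ in range(20)]
print(moves.count(COOPERATE), moves.count(DEFECT))
```

The point being that the interesting design question lives entirely in
`policy`, which is specifiable in advance, not in the model's hit rate.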
> So we then need either;
>
> a) a design for an AGI that is not a complex, self-organizing
> system, but is more predictable and tractably mathematically modelable
> b) a really powerful new mathematics of (intelligent) complex,
> self-organizing systems
'Self-organising' is a bit of a vague term. It can mean 'self-modifying,
but with less seed complexity than a fully specified seed AI' or 'AGI
in which learning operations are probabilistic and/or stochastic' or
'AGI in which self-modification is pervasive, local and without central
control'. As far as I can tell Novamente falls in the first and third
categories and may or may not be in the second depending on how much
you've moved on from the 'bubbling broth' approach. Designs produced by
the approach the SIAI advocates will not be any of these things.
However, all seed AIs use radical self-modification, and so the problem
of proving that abstract constraints hold for highly complex functional
mechanisms remains. The solution therefore combines both (a) and (b), though
I wouldn't say that what is needed is new mathematics so much as a novel
means of describing AGI functionality that allows us to tractably apply
the formal methods we've already got.
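A toy illustration of the shape of that solution (Python; the exhaustive
finite-state check is my own stand-in for real formal methods, not anything
the SIAI has built): self-rewrites are only adopted when a machine-checkable
constraint is verified to still hold for the proposed replacement.

```python
# Toy sketch: a self-modifying system whose rewrites must pass a
# verifier before adoption. Here the 'proof' is exhaustive checking
# over a tiny finite state space; a real system would need to
# discharge the same obligation with tractable formal methods.

STATES = range(16)

def invariant(step) -> bool:
    """Constraint: the transition function stays inside the state space
    and never maps a nonzero state into the 'forbidden' state 0."""
    return all(step(s) in STATES and (s == 0 or step(s) != 0)
               for s in STATES)

def adopt(current, proposed):
    """Accept the proposed transition function only if the invariant
    is verified to hold; otherwise keep the current one."""
    return proposed if invariant(proposed) else current

step = lambda s: s                            # identity: trivially safe
step = adopt(step, lambda s: (s + 1) % 16)    # rejected: maps 15 -> 0
step = adopt(step, lambda s: min(s + 1, 15))  # accepted: never reaches 0
print(step(15), step(3))
```

The hard part, of course, is making `invariant` checkable for mechanisms
vastly more complex than a sixteen-state toy, which is exactly the
descriptive problem referred to above.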
> It may be that the best way to achieve b is to create a
> "specialized AGI" whose only domains are mathematics and scientific
> data analysis. The question then becomes whether it's possible to
> create an AGI with enough power to help us rapidly achieve b, yet
> without giving this AGI enough autonomy and self-modifying capability
> to achieve a surprise hard takeoff.
Now this is the interesting part. I am very much in favour of this; it
seems to me that the difficulty of deliberative design increases
steadily as you go up from limited domain non-general AI to a seed FAI.
Furthermore I don't think general intelligence is necessary at all to
produce a really useful tool.
> I believe that this is possible to do, for instance using Novamente. I
> don't think it's an easy problem but I think it's a vastly easier problem
> than creating an AGI that remains Friendly through a takeoff.
This is where we part ways again. I think that the best approach is to
develop progressively more advanced AI systems, using each existing system
as a formal prover to develop the next design. The goal is a system powerful
enough to support the design of an FAI, which may range from nothing (if
Eliezer is right and he can design a perfect FAI on paper first time) to
a Bostrom Oracle (if FAI design is only possible for Powers). However I
think that the only safe and effective way to bootstrap this is to use the
same methods to design the initial system by hand. You think that it's
possible and sensible to use educated guessing and exploratory prototyping
to develop an opaque initial system; I think that this is folly (albeit
one I sympathise with, as I held the same views not too long ago).
> 1) toddler AGI's interacting with each other in simulated environments,
> which will pose no significant hard-takeoff danger but will let us begin
> learning empirically about AGI morality in practice
This makes no sense if your AGI does in fact possess a causally clean
goal system. 'AGI morality' should be something you inscribe on your
code (or startup KB); it isn't something that 'emerges' unless the goal
system has highly unstable definitions that don't track specific targets
in reality. The only empirical questions are what the self-modification
trajectories of various sorts of goal system look like, and how effective
your AGI is at the cognitive task of predicting how other agents will
react. If you are truly in the position of an 'experimental behaviourist',
trying out opaque configurations to see what behaviour comes out, you are
sunk before you begin.
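The first of those questions can be caricatured in a few lines (Python;
purely illustrative numbers, my own toy): a goal whose definition re-reads
a fixed external referent stays put under repeated rewrites, while one
whose definition is itself the thing being perturbed drifts without bound.

```python
import random

random.seed(0)

TARGET = 42.0  # the fixed external referent the goal is meant to track

def run(indirect: bool, steps: int = 1000) -> float:
    """Follow a goal value through `steps` rewrite cycles and return
    its final error relative to the referent. If `indirect`, each
    rewrite perturbs the stored definition, so errors compound;
    otherwise the referent is re-read every cycle."""
    goal = TARGET
    for _ in range(steps):
        if indirect:
            goal += random.gauss(0, 0.1)  # unstable definition drifts
        else:
            goal = TARGET                 # definition re-anchored
    return abs(goal - TARGET)

stable_err = run(indirect=False)
drift_err = run(indirect=True)
print(stable_err, drift_err)
```

A goal system with definitions that track specific targets in reality is
the first case; 'emergent morality' is a symptom of the second.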
> Note that these two goals intersect, if you buy the Lakoff-Nunez argument
> that human math is based on metaphors of human physical perception and
> action.
How does that require interaction between AGI instances? The only bit of
maths that requires an idea of other intelligences is game theory.
> Once we have 1 and 2 we will have much better knowledge and tools for
> making the big decision, i.e. whether to launch a hard-takeoff or
> impose a global thought-police AGI-suppression apparatus....
Are you going on the record saying that AGIRI will take it upon themselves
to try and implement (2)?
> I have a pretty good idea what I'm looking for, in fact. I'm looking for
> dynamical laws governing the probabilistic grammars that emerge from
> discretizing the state space of an interactive learning system. I have
> some hypotheses regarding what these dynamical laws should look like.
I could make an educated guess at what you mean by this, but experience
has taught me to avoid guessing about other people's AGI ideas if
possible. Would you care to expand?
> I don't believe it's computationally feasible to have every last
> computation done in the AGI system follow directly from the goal system.
I agree that goal-relevance is a guess and it's not possible to make
that guess on an individual basis for every computation, and the bulk
of computation is local and not deliberative in the decision sense...
> So it becomes a matter of having the goal system regulate some more
> efficient but less predictable self-organizing learning dynamics.
...but I consider 'self-organising' to be harmful and 'regulate' to
be a poor cousin to 'specify'.
> Which makes the overall behavior less easily predictable...
Technically, yes. Practically, not necessarily, because a good design
should be able to enforce absolute constraints on the 'freedom of
action' of local dynamics that prevent them from altering anything
important.
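For example (Python sketch, entirely my invention): local learning dynamics
propose arbitrary writes, but a thin enforcement layer makes designated
regions of the system untouchable, so unpredictability in the local dynamics
cannot alter anything important.

```python
# Toy sketch of an absolute constraint on the 'freedom of action' of
# local dynamics: writes to protected state are refused unconditionally,
# regardless of what the (unpredictable) local process proposes.

PROTECTED = {"goal_spec", "verifier"}

class GuardedStore:
    def __init__(self, state: dict):
        self._state = dict(state)

    def propose_write(self, key: str, value) -> bool:
        """Apply a local update unless it touches protected state."""
        if key in PROTECTED:
            return False          # constraint enforced unconditionally
        self._state[key] = value
        return True

    def read(self, key: str):
        return self._state[key]

store = GuardedStore({"goal_spec": "original", "weights": 0.0})
assert not store.propose_write("goal_spec", "subverted")  # refused
assert store.propose_write("weights", 1.5)                # allowed
print(store.read("goal_spec"), store.read("weights"))
```

The local dynamics remain as unpredictable as you like inside the
unprotected region; the design question is drawing the boundary correctly.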
> The efficiency workarounds seem to inevitably increase unpredictability.
Again it all comes down to having a mechanism that can prove specific
constraints hold for the behaviour of complex (possibly self-modifying)
systems.
* Michael Wilson
This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:56 MST