From: William Pearson (email@example.com)
Date: Mon Jun 05 2006 - 06:04:50 MDT
I don't think this has been raised before; the only similar suggestion is that we should start by understanding systems that might be weak and then convert them to strong systems, rather than aiming for a weak system that is hard to convert to a strong one.
I don't believe strong self-improvement is at all likely, but I shall
treat it as if it were possible for the purposes of this post.
The currently accepted best method for achieving a first Friendly
Strongly improving system (FSIS) is simply to have teams trying to
build it. I would like to present a slightly contrarian view that to
make the first strongly improving system Friendly, it could actually
be better to start a research effort to build a weakly improving
system in public and think about strong systems in private.
First, as a prelude, I shall write something about the relative difficulties of each type of system. Strong systems are hard, or at least, going by the evidence from nature, less probable than weak systems; otherwise evolution would likely have found one by accident as it searched through chimps, dolphins, humans and other optimisation processes. In a similar vein a weak system should be easy, as it seems more probable and we have a number of different examples of weak systems in nature to use as rough guidelines. It is also generally thought to be easier, or roughly as easy, to build an unfriendly strong system as a Friendly one; otherwise there would be less need to discuss Friendliness.
Currently we are at a stage where roughly equal orders of magnitude of resources are going into weak and strong general intelligence research, and a lot less into Friendliness research. Assume that the amount of resources going into each strand of research bears some positive relation to the likely date of that strand's completion. The ideal, then, would be to decrease the amount of resources going into strong (or potentially strong) unfriendly research and increase the amount going into Friendly research. Why might concentrating on weak systems, at least to start with, do that?
The first reason is that it would initially reduce the amount of resources going into strong systems. A solid and promising research agenda focused on weak systems would draw in those people interested in optimisation processes and lure them down an easy path away from the very dangerous strong systems. Currently people are scattered all over the space of optimisation processes; focussing them on weak ones should lead to a local minimum, as it has with humans, that will be hard to escape from.
The second reason to start off weak is that experience is a very good teacher of humans. As humanity built and interacted with weak systems, it would gain experience with non-human optimisation processes, and so those interested in optimisation processes would be less likely to commit the anthropomorphic error. It is also likely that we would have a fair amount of trouble with poorly defined weak systems, which might inspire people to show more caution when attempting to create stronger systems. In all, they would see the dangers of stronger systems as a lot more real and probable, and so increase the effort put into making their systems Friendly, if they went on to create strong systems.
To make a weak system we would need to analyse why the human brain is weak (possibly due to decentralised control and decentralised changes on decentralised hardware) and implement a similar system in silicon. This would be done in hardware rather than software, so that we can be sure that sub-goal stomp cannot completely rewrite the system and lead to strongness.
People can assume that I have decided to focus on the weak option if
they see my writings elsewhere.
This archive was generated by hypermail 2.1.5 : Sat May 18 2013 - 04:01:01 MDT