English translation of French article about SIAI

From: Edmund Schaefer (edmund.schaefer@gmail.com)
Date: Mon Nov 22 2004 - 13:56:24 MST


This is just a quickie translation I put together, so it retains a bit
of French pompousness and awkwardness in parts (you'll quickly notice
the French passion for really long sentences), but at least it's fully
grammatical and thus easier to read than what the Babelfish puts out.

Original URL:
http://www.automatesintelligents.com/labo/2004/sep/singularity.html

L'Institut pour la Singularité
Singularity Institute

by J. P. Baquiast and C. Jacquemin 19 September 2004
translated from the French by Edmund Schaefer 22 November 2004

In this review we do not try to ignore a phenomenon that some may consider
marginal, but which appears to us revealing of what will undoubtedly become
a tidal wave, if humanity does not by then collapse into physical and
intellectual underdevelopment: the likely appearance, within a relatively
short time, of super-intelligences and of post- or transhumans. Many people
are talking about it, mostly in the United States: some in terms as
scientific as possible, others mixing without hesitation science fiction and
dreams that are more or less New Age or sectarian.

Among those who offer a perspective on it that could be called scientific,
we must note the existence of a project launched by some young American
scientists, certainly a bit visionary, intended to produce an artificial
intelligence (AI) of a new type, able to renew the bases of human
intelligence and to improve itself almost automatically. The promoters of
this project have created an Institute, very modest in members and means but
endowed with immense ambition, the Singularity Institute [Singularity
Institute]. This Institute makes itself entirely visible on the web. It
offers, incidentally, a mailing list, to which we subscribed, having nothing
to lose, in order to follow the evolution of the project. The best way to
study it is to go to the site intelligence.org, which offers a very complete
dossier on the motivations and the objectives pursued. The authors display a
true desire to share knowledge, which must be praised in an epoch where
everyone barricades himself behind copyrights. Admittedly, they seek by
doing this to procure memberships (and material support), but they take the
reader seriously, supplying him with the maximum of possible explanations,
which is fortunate for a theme that is not easy. If all university
professors made this much effort to reach out to the public in easily
understandable terms, we would not be at the level of scientific illiteracy
at which we find ourselves.

The whole of this work, which is considerable, seems to flow from the
original intuitions of a very young researcher, partially self-taught,
Eliezer S. Yudkowsky, whose precocity and penetration one can only admire.
One of our correspondents, Jacque de Pasquier, informed us of the French
translation, published on the site Transition, of an article of his written
mainly in 1996, that is to say at the age of 16: Staring into the
Singularity: http://dtext.com/transition/yudkowsky/yudkowsky1.html. One finds
the principal elements of this article, presented in a scientific fashion,
in the pages published by the site. Eliezer Yudkowsky is today employed
full-time by the Singularity Institute. He develops there the conceptions
that should lead, in the near future, to the launch of the aforementioned
computer engineering project.

Of what does the project consist?

First, what is the Singularity? The page [What is the Singularity]
http://www.intelligence.org/what-singularity.html defines it as the creation
by technological means of an intelligence more than human, or
super-intelligence. The authors of the project do not innovate on this
point. They continue the predictions made by the American futurist
information-technology specialists whom we have often cited in this review:
notably Ray Kurzweil and Hans Moravec. The coming years will see (barring
catastrophe) the capacities of components, networks, and software continue
to increase at the rhythm summarized by Moore's famous law (performance
doubling every 18 to 24 months while prices fall correspondingly). It
follows that computers and robots provided with such resources will
theoretically be capable, within a few years (10 to 15), of performance at
least equal to that of the current human brain.
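
As a rough illustration of the arithmetic behind this claim (the 18-to-24-month
doubling period and the 10-to-15-year horizon are the article's figures; the
little calculation itself is only a back-of-the-envelope sketch of ours, not
anything taken from the Institute's documents):

# Back-of-the-envelope projection of Moore's-law growth as described above.
# Assumptions: performance doubles every 18 to 24 months, over 10 to 15 years.

def growth_factor(years, doubling_months):
    """Multiplicative gain in performance after `years` at the given doubling period."""
    return 2 ** (years * 12 / doubling_months)

for doubling_months in (18, 24):
    for years in (10, 15):
        factor = growth_factor(years, doubling_months)
        print(f"{years} years at one doubling per {doubling_months} months: x{factor:,.0f}")

# At 18-month doublings this gives roughly a 100-fold gain over 10 years
# and roughly a 1000-fold gain over 15 years.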

Moreover, starting from a certain concentration of resources in time and
space, one will probably see self-improving phenomena organize themselves at
a uniformly accelerating pace. In the material domain, mixed systems,
artificial and biological, will appear, able on their own to remedy their
defects, to repair themselves, and later to assemble themselves, as life
learned to do over millions of years of evolution. It will be the same for
software. Successive generations of programs will appear at a rapid rhythm,
constantly better adapted, constantly richer in cognitive content. In other
words, the slow rhythms of genetic evolution and natural culture will be
replaced by a continuously accelerating artificial evolution. Hence the term
Singularity. Just as, following the cosmological singularity that preceded
the Big Bang, the universe as we know it developed over a few billion years,
so too, following this new founding event that the technological Singularity
thus described will be, a new type of markedly artificial evolution will
appear on Earth, before eventually scattering into the Cosmos.

What then of humanity? Logically, humans will not remain at their current
mental and physical level. They will be able to join in the events that
follow the artificial Singularity, since they will be able to obtain bodies
and brains of considerably augmented capacities. Hence the concepts of
posthumanity or transhumanity. The pessimists fear that psychologies,
determined by a still-unchanged heredity, will not improve as far, which
opens rather sinister perspectives, not only for humanity but for life on
Earth. For the optimists, on the contrary, humans thus augmented will apply
their super-intelligence and their extraordinary forms to the improvement of
life on Earth, to the benefit not only of the whole of humanity but of all
living species and the great ecological equilibria.

But, for this favorable result to be conceivable, humans will need to use
some common sense. If the evolution of the primary technologies proceeds in
the classical Darwinian style, that is, by chance and selection, all sorts
of psychological and physical organizations might appear. If, on the
contrary, humans try to maintain the orientation of evolution as a function
of values that they consider must be preserved or brought into being,
posthumanity could mark progress compared to present humanity.

It is consequently necessary to engage, without waiting, in practical work,
bearing in mind that technological evolution does not wait but on the
contrary accelerates, as indicated above. It is here that the proposals of
the Singularity Institute come in.

Two domains of research are already open: that of physical systems
(electronic components, networks, diverse materials) and that of software.
The first will largely call upon nanotechnologies and, later, quantum
computers. But the investments required to produce self-adapting and
self-reproducing systems will be considerable, out of reach of small
organizations. In the domain of software, by contrast, it is possible to
develop more and more ambitious applications and systems with relatively
modest computational resources. Several technologies could be used to this
end: direct brain-machine interfaces, or genetic engineering permitting more
efficient brains to be obtained. These would improve the speed of
information processing, the number of active neurons, the extent of
connections between cerebral lobes, and the performance of sensorimotor
input-output. But given the risks and difficulties that these technologies
present today, one understands that the promoters of the project prefer to
stay with AI. It is nevertheless necessary that this AI truly break with
current AI, which is very centered on utilitarian applications of an
industrial nature. It is necessary to define an AI that can become as
generative, as self-complexifying, as natural intelligence -- but over a
span of several years, not over the course of a process of several million
years. Several technologies exist within AI. It seems that the authors of
the project favor multi-agent systems deployed on networks of
microcomputers, which are the easiest to implement.
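
The article does not describe how such a multi-agent system would actually
be organized, so the following is only an illustrative toy, entirely of our
own devising (the agents, the toy task, and all names below are assumptions,
not the Institute's design): several independent agents search a simple
problem, periodically exchange their best findings over the "network", and
thereby improve faster than any one of them working alone.

import random

# Illustrative toy of a multi-agent system (not the Institute's design):
# agents independently search for the maximum of a simple function and
# periodically share their best results with the other agents.

def fitness(x):
    return -(x - 3.0) ** 2  # toy objective, maximized at x = 3

class Agent:
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.best = self.rng.uniform(-10.0, 10.0)

    def step(self):
        # Locally perturb the current best guess and keep any improvement.
        candidate = self.best + self.rng.gauss(0.0, 1.0)
        if fitness(candidate) > fitness(self.best):
            self.best = candidate

    def receive(self, other_best):
        # Adopt a peer's result if it is better than our own.
        if fitness(other_best) > fitness(self.best):
            self.best = other_best

agents = [Agent(seed) for seed in range(5)]
for round_number in range(50):
    for agent in agents:
        agent.step()
    best_overall = max((agent.best for agent in agents), key=fitness)
    for agent in agents:
        agent.receive(best_overall)  # "network" exchange of the best finding

print(f"best solution found by the group: {best_overall:.3f} (optimum is 3.0)")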

Work of this nature obliges us, as we have long known, to question in depth
what intelligence and consciousness are in nature, in order to make what
follows better than what precedes. AI being a creation of human
intelligence, it can in turn improve human intelligence, in a cycle repeated
without end. This supposes an analysis of what intelligence is today. The
page http://www.intelligence.org/LOGI/ proposes the first elements of such an
analysis, distinguishing the principal hierarchically nested levels,
relating to the processing of elementary information, of sensory messages,
of concepts, of ideas, and finally of reasoning or discourse. This is not in
itself original, but what is interesting is the fashion in which the
proposed type of AI could improve natural cognitive processes.
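
For the reader who prefers it spelled out, the hierarchy just described can
be summarized as follows (the level names are simply our rendering of the
article's wording, not the LOGI page's own terminology):

# The nested levels of cognition as paraphrased in this article
# (our wording, not the LOGI page's exact terminology).
COGNITIVE_LEVELS = [
    "elementary information processing",
    "sensory messages",
    "concepts",
    "ideas",
    "reasoning and discourse",
]

for depth, level in enumerate(COGNITIVE_LEVELS):
    print("  " * depth + f"level {depth}: {level}")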

Can human intelligence today conceive of an improved, or simply different,
intelligence? More generally, can it conceive of a society improved, in
complexity as in functionality and services rendered, relative to today's
society? The exercises of science fiction show a distressing lack of
imagination in this respect; they are restricted to extrapolating current
features to the point of ridicule. The pre-hominid primates could not have
imagined our contemporary society, nor, for that matter, could our
grandfathers. It is therefore necessary to put in place a self-adaptive
development process that permanently revises its ambitions and its means as
a function of the results continually obtained.

We will not go into the details of the method being implemented by the work
of the Institute. The page http://www.intelligence.org/LOGI/seedAI.html
specifies it, and the reader should refer there. One can translate [Seed AI]
as self-generating or seminal AI, in the sense that its developments
engender themselves through repetition of experience.
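
The seedAI page itself should be consulted for the actual method; purely as
a cartoon of the idea of improvement through repeated experience, here is a
minimal Python sketch, entirely our own and far weaker than what the
Institute means by Seed AI, in which a program repeatedly proposes a
modification to one of its own components and keeps it only when measured
performance improves:

import random

# Cartoon of iterative self-improvement (not the Institute's Seed AI method):
# the program revises its own predictive component and retains a revision
# only if it scores better on the experience gathered so far.

def make_predictor(weight):
    """The component being revised: predicts y from x as weight * x."""
    return lambda x: weight * x

def score(predictor, experience):
    # Higher is better: negative squared error over past observations.
    return -sum((predictor(x) - y) ** 2 for x, y in experience)

experience = [(x, 2.0 * x) for x in range(10)]  # hidden regularity: y = 2x
weight = 0.0
rng = random.Random(0)

for generation in range(200):
    candidate = weight + rng.gauss(0.0, 0.1)  # propose a revision of itself
    if score(make_predictor(candidate), experience) > score(make_predictor(weight), experience):
        weight = candidate                    # adopt the improvement

print(f"weight after repeated self-revision: {weight:.2f} (target 2.0)")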

Let us add that the promoters of the project often insist that these
developments remain controlled by a volition constantly readjusted toward
humanism. It is necessary to make the AI friendly ([friendly]). This aims to
disarm the increasingly frequent criticisms expressed with regard to the
uncontrolled development of technologies and systems, whether in regard to
nanotechnologies, robotics, or bionics. The page [Friendly AI]
http://intelligence.org/friendly/ describes in great detail the technical and
functional specifications of the project. We leave that reading to the
computer scientists. As for the qualifier [friendly], one cannot help but be
a bit sceptical. Nothing is ever absolutely friendly in the world, including
software. There is always a bit of predation mixed in. But one cannot deny
the authors of the project a displayed will to share knowledge, put in the
service of a certain number of objectives aiming to improve inter-human
relations.

The site is constantly evolving and being refined, which says much about the
strong work ethic of its principal author, E. Yudkovsky [sic]. The latest
text available as of this writing, dated May 2004, is entitled [Collective
Volition] http://intelligence.org/friendly/collective-volition.html. The
author announced that he now prefers, to the term [Friendly AI], that of
[Friendly Really Powerful Optimization Process]... which needs no
translation.

What to think of it?

The sceptics will see in all of this the illusion of some impassioned
youths, a machine for acquiring a bit of fame and money, or one of many
products of a disinformation campaign seeking to convince the world that
America continues to hold a substantial intellectual lead permitting it to
claim leadership of the world. We will not give in to these easy judgments.
For lack of time and means, we do not pretend to appraise the technical
quality of these abundant documents and information beyond first
impressions, which appear promising. It would nevertheless seem necessary to
us to look at it more closely because, as we said, the enterprise could take
on a great scope, scientific but also political.

The project of attempting to develop an advanced or very advanced version of
AI appears excellent to us, and to come at the right time. AI today takes
pleasure in research that is compartmentalized, utilitarian, careless about
communicating with the public, and entirely without vision. This is
especially the case in France. Reading the documents supplied by the
Singularity Institute represents in this regard a true fountain of youth. We
realize that this could be a great AI program able to optimize the
constantly enriched resources provided by technology. We see equally that
such a great program would undoubtedly require a considerable budget. Many
small teams working in a network could rapidly obtain important results, as
long as they made good organizational decisions. We also think that, at
least at the beginning, much of the work would have to be done voluntarily
by programmers pooling some resources, in the style of grid computing, in
parallel with their professional activities. If one had to wait for public
or private financing before commencing work, nothing would ever be done.
That is a small lesson to be drawn from the example given by the Singularity
Institute.

But, supposing that in Europe (or even in France) some AI specialists are
interested in such an initiative, what should they do? Two approaches are
possible, after a serious evaluation of the scientific substance of the
approach and documents proposed by the Institute:
- make contact with E. Yudkowsky's team, as the site invites, and negotiate
a possible collaboration (necessarily carried out at a distance);
- or, on identical or very similar specifications, develop original
solutions on their own, which does not prevent maintaining contact with the
Institute.

Let us add, for those who are not specialists in AI developments in the
United States, that various researchers there have for several years
undertaken, in an academic framework, to develop ambitious versions of AI.
It is also, evidently, a stake for security and defense systems, but
information about that is rarely available.

In any event, if some of our readers want to deepen this perspective
and wish to make it known, we would be happy to publish their comments
and proposals.

© Automates Intelligents 2004


