Re: AI Goals [WAS Re: The Singularity vs. the Wall]

From: Jef Allbright (jef@jefallbright.net)
Date: Tue Apr 25 2006 - 15:38:46 MDT


On 4/25/06, Woody Long <ironanchorpress@earthlink.net> wrote:
> With regard to developing safe AI, I don't think there can be any
> guarantee. The best we can do is to incorporate a model of human
> values as broad-based as possible, and to promote the growth of our
> evolving values based on principles rather than ends.
>
> - Jef
>
> There is another way. Build a super-intelligent non-biological intelligence
> that is a science and engineering super-expert. This SE Singularity Machine
> would maintain the mega systems of earth such as electric grids, nuclear
> power plants, weather systems, transportation systems, etc., plus actively
> advance all sciences, such as medical science, the space exploration
> sciences, etc.
>
> Thus, the net effect of this SE SM is literally a technological systems
> paradise. The key to building such a friendly, safe-built, SE SM is to
> build it solely and exclusively as a science and engineering super-expert.
> Such a SE Singularity Machine will "know" its expertise to be exclusively
> science and engineering, and will "feel" its sole "prime purpose" to
> exclusively shine in science and engineering. As such, a SE Singularity
> Machine will ALWAYS defer ALL political and religious issues to the
> appropriate experts, and get back to science and engineering, which is its
> Exclusive Expertise and sole Prime Purpose.
>
> This is the only kind of friendly AI that I could support at this time, all
> else being too risky.
>

Promoting shared human values into the future requires effectiveness
in two complementary areas:

(1) Subjective (increasingly intersubjective) understanding of our shared values
Humans display a wide array of complex values, but we have much in
common due to our shared evolutionary history. Some of our values are
encoded at the genetic/developmental level and expressed as innate
feelings of affection, joy, disgust, and so on. Others are encoded in
our culture and expressed as religious laws, community codes of
ethics, and other less explicit forms. Still others are encoded in
our modes of reason and rationality, supported by ideals such as
"truth", consistency, and coherence, and by an emerging appreciation
of growth and its constituents: diversity, cooperation, and
competition. Our shared values are far from arbitrary, having been
refined over a long process of evolution.

(2) Objective (increasingly scientific, instrumental) understanding
of what works
As with (1), an evolutionary process of increasingly effective (and,
we assume, increasingly accurate) models of our universe tends to
increase our success.

We are now approaching the point in human development where we have
the tools to take the next step up, above instinctual morality and
above culturally enforced ethics, to an intentional, rational system
of cooperative social decision-making: a network-based framework that
amplifies our awareness of (1) our shared subjective values and (2)
our increasingly objective knowledge of what works.

Increasing awareness of what works, applied to increasing awareness of
our shared human values, leads to increasingly effective
decision-making that will be seen as increasingly good.
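To make that combination a bit more concrete, here is a minimal toy
sketch in Python (every name, weight, and number in it is a
hypothetical illustration, not a proposed design) of how a decision
framework might score candidate options by blending (1) measured
alignment with shared values and (2) objective evidence that an
option actually works:

# Toy sketch: combining (1) shared-value alignment and (2) evidence of
# "what works" into a single decision score. All names, weights, and
# data are hypothetical illustrations of the idea, not a real design.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    value_alignment: float    # (1) degree of intersubjective agreement that this fits shared values, 0..1
    efficacy_evidence: float  # (2) objective evidence that this option actually works, 0..1

def decision_score(opt: Option, value_weight: float = 0.5) -> float:
    """Blend subjective value-alignment with objective efficacy.

    As either kind of awareness improves (better elicitation of shared
    values, better science), the inputs become more trustworthy and the
    resulting choices look increasingly good.
    """
    return value_weight * opt.value_alignment + (1 - value_weight) * opt.efficacy_evidence

if __name__ == "__main__":
    options = [
        Option("policy_a", value_alignment=0.9, efficacy_evidence=0.4),
        Option("policy_b", value_alignment=0.6, efficacy_evidence=0.8),
    ]
    best = max(options, key=decision_score)
    print(f"Preferred option under this toy model: {best.name}")

The point of the sketch is only the structure: neither input alone is
sufficient, and improving our awareness of either one improves the
outcome of the combination.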

Such a broad-based framework, grounded in human values and amplified
by technology (including AI), will effectively promote our values
into the future.

Unless we destroy ourselves first.

- Jef
http://www.jefallbright.net
Increasing awareness for increasing morality
Empathy, Energy, Efficiency, Extropy


