Re: AI Goals

From: Jef Allbright
Date: Wed Apr 26 2006 - 11:28:53 MDT

Jef Allbright wrote:

> > Fundamentally, I'm saying that with regard to morality, evolutionary
> > selection prevails (and there's nothing intrinsically nice about that)
> > and I'm also saying that we have reached a level of development where
> > subjective agents can actively and intentionally contribute to the
> > process.
> > - Jef

On 4/26/06, Woody Long <> wrote:

> I'm just looking for something to believe in - a friendly, beneficial SAI
> that I can wholeheartedly support and promote to the general public. We
> have polar opposite visions of the technological singularity: I prefer an
> exclusively Science and Engineering TS, and you prefer a Values Promoting
> TS.

Woody, I am saying that it is necessary that we increase our awareness
of *both* aspects: our evolving subjective values and our evolving
scientific/technical knowledge. In order to make "good" decisions, it
is necessary to understand which direction we want to go, and what
works to get us there.

Both areas of knowledge will benefit from a framework of collaborative
knowledge growth over increasing scale. I envision the
objective/scientific portion doing increasingly sophisticated modeling
based on our values, applying objective data, methods, and principles
of the growth of dynamical systems.

On 4/26/06, Phillip Huggan <> wrote:

> The problem with treating values as evolutionary instead of objective, is
> that it is easy to get stuck in a broad low maximum. MNT or AGI in service
> of Neocapitalism would be a dystopia. Evolutionary urges can be sublimated.
> I'm not saying that is easy or practical for present generations. But in
> considering a world of the future, sufficiently mature systems of education,
> psychiatry, psychology and social safety nets can make evolutionary
> pressures negligible. In the long run the speed of light limits brain sizes
> to a few hundred km across (I think), so get used to ultimate limits to
> growth.

Phillip, when I say evolution prevails, I'm not referring to our
innate evolved drives. I'm referring to the universal process of
evolution that we observe at all scales whereby new configurations
arise "randomly" and then selection occurs based on fitness within the
local environment. This process applies to the development of stars
and galaxies, biological life, and culture, and we have now arrived at
the unprecedented level where subjective agents can (and should, in
the moral sense) influence the process (at least from their own point
of view).
With respect to the speed of light limiting the size of brains, you
have a point that the speed of light is a real constraint on the
structure at that level, but this does not put a limit on growth that
proceeds at higher levels of organization. Think of independent city
states limited by geography and speed of travel, each developing
mostly independently, and then consider the boost to growth that
occurs when they begin communicating and trading. The higher-level
structure accompanies an increase in degrees of freedom--growth.

In the interest of list quality, I'm going to retire for now from the
discussion on the public list but welcome ongoing constructive
discussion offlist.

- Jef
Increasing awareness for increasing morality
Empathy, Energy, Efficiency, Extropy

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT