Re: Destruction of All Humanity

From: micah glasser
Date: Wed Dec 14 2005 - 22:40:46 MST

Let me clarify what I mean by 'rational agent' because I certainly am not
talking about a thermostat. What I mean is any entity that acts on or
through the environment using tools of any kind, by predicting possible
future states of affairs and acting to realize the possible state of affairs
which most closely approximates the nearest goal at hand. That nearest goal
should also be in service to a highest goal. The more rational the agent, the
further into the future that super-goal is projected. I stipulate that
greater freedom must be part of that super-goal for any rational agent,
because one part of freedom means increasing one's ability to control the
environment, and control of the environment is a result of the growing
effectiveness of system modeling (science) and of manipulating that system
according to its fundamental laws (technology). All rational agents must
have this as a goal system because it is required by the definition of
rational agency. A thermostat is NOT a rational agent precisely because it
cannot effectuate a possible state of affairs: it cannot model a
system. Yes, it acts according to rational principles, because it was designed
to operate according to a function. But it is NOT an agent. A rational agent
has power because it is able to locate its powers of causality at a higher
level of emergent properties which functions as a 'top-down' locus of
causation. This is in distinction to the only other form of agency, the
'bottom-up' causality which operates according to the lowest level of
complexity in the cosmological system and determines the outcome of that
system. I hope I have made myself clear on this, but I realize I am probably
just confusing your simple picture.
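The decision procedure described above — predict the possible future states each action would bring about, then act to realize the one closest to the nearest goal — can be sketched as a minimal agent loop. All names and numbers here (the Agent class, the predict and distance functions, the toy 1-D world) are hypothetical illustrations, not anything from this thread:

```python
# Minimal sketch of the 'rational agent' loop described above:
# predict the outcome of each available action, then pick the action
# whose predicted state of affairs best approximates the nearest goal.

class Agent:
    def __init__(self, goal, predict, distance):
        self.goal = goal          # desired state of affairs
        self.predict = predict    # model: (state, action) -> predicted state
        self.distance = distance  # how far a state is from the goal

    def choose(self, state, actions):
        # Act to realize the possible state of affairs that most
        # closely approximates the nearest goal at hand.
        return min(actions,
                   key=lambda a: self.distance(self.predict(state, a),
                                               self.goal))

# Toy usage: a 1-D world where the goal is to reach position 10.
agent = Agent(goal=10,
              predict=lambda s, a: s + a,   # the agent's system model
              distance=lambda s, g: abs(s - g))
print(agent.choose(state=4, actions=[-1, 0, 1, 2]))  # prints 2
```

A thermostat, on this definition, lacks the `predict` component entirely: it reacts to the current state without modeling any alternative futures, which is why it fails the agency test.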

On 12/14/05, Jef Allbright <> wrote:
> On 12/14/05, David Picon Alvarez <> wrote:
> > From: "Jef Allbright" <>
> > > David makes good points here, but interestingly, as we subjective
> > > agents move through an objectively described world, we tend to ratchet
> > > forward in the direction we see as (subjectively) good. Since we are
> > > not alone, but share values in common with other agents (this can be
> > > extended to non-human agents of varying capabilities) there is a
> > > tendency toward progressively increasing the measure of subjective
> > > good.
> >
> > That's only a consequence of relatively symmetric game theory
> > situations. Subjective good is rising for us humans, because we're
> > playing a symmetric game and a certain level of cooperation is
> > desirable for ourselves. Subjective good isn't, or need not be,
> > rising for cows, which are playing a completely asymmetric game with
> > us; we eat them, whether they like it or not.
> >
> > > Appreciating and understanding the principles that describe this
> > > positive-sum growth would lead us to create frameworks to facilitate
> > > the process of (1) increasing awareness of shared values, and (2)
> > > increasing awareness of instrumental methods for achieving our goals.
> >
> > That would essentially come to game theory. A super AI would
> > probably also be asymmetrically placed with respect to us. Our
> > consent or cooperation is probably not necessary or even helpful to
> > an SAI.
> >
> Thanks David for highlighting the necessity of near-symmetry between
> agents. I was going to mention this later in the discussion since it
> is of critical importance to the question of whether we'll have time
> to develop a broad-based collective intelligence augmented with AI
> before we're made irrelevant by one or more narrowly focused SAIs.
> Of course, this list has dealt with this particular question many
> times already so I don't intend to bring it up for rehashing. My
> intent was to clarify what we mean when we talk about "morality" in
> general and the inconsistencies of conventional ethical thinking in
> particular.
> - Jef
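David's symmetry point can be made concrete with two toy payoff matrices. All of the numbers and move names below are invented purely for illustration (a stag-hunt-style symmetric game versus a game where the stronger player's payoff does not depend on the weaker party at all):

```python
# Rows: our move; columns: the other agent's move.
# Entries: (our payoff, their payoff). Payoffs are made up for illustration.

# Roughly symmetric game (human vs. human): cooperating with a
# cooperator is the best response, so mutual cooperation is stable.
symmetric = {
    ("cooperate", "cooperate"): (4, 4),
    ("cooperate", "defect"):    (0, 3),
    ("defect",    "cooperate"): (3, 0),
    ("defect",    "defect"):    (1, 1),
}

# Asymmetric game (SAI vs. human): the first player's payoff for
# 'defect' dominates regardless of what the human does, so the human's
# consent never enters its choice.
asymmetric = {
    ("cooperate", "cooperate"): (2, 3),
    ("cooperate", "defect"):    (2, 0),
    ("defect",    "cooperate"): (5, -1),
    ("defect",    "defect"):    (5, -1),
}

def best_response(payoffs, their_move):
    # The first player's payoff-maximizing move, given the other's move.
    return max(("cooperate", "defect"),
               key=lambda mine: payoffs[(mine, their_move)][0])

print(best_response(symmetric, "cooperate"))   # prints cooperate
print(best_response(asymmetric, "cooperate"))  # prints defect
print(best_response(asymmetric, "defect"))     # prints defect
```

In the symmetric game, cooperation is desirable for both sides; in the asymmetric one, the stronger player defects no matter what the weaker player does, which is the cow's (and possibly our) situation.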

I swear upon the altar of God, eternal hostility to every form of tyranny
over the mind of man. - Thomas Jefferson

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:54 MDT