Re: Please Re-read CAFAI

From: micah glasser (
Date: Wed Dec 14 2005 - 22:22:38 MST

I would question the assumption that a sufficiently intelligent AGI would
not have a 'self'. Any intelligence which is conscious and aware of that
consciousness certainly would have a self. I argue that a sufficiently
powerful AGI must have self-consciousness because such an intelligence, in
order to be sufficiently intelligent, must be able to add its own agency as
one aspect of the system that it is modeling (reality). Now if this is true,
i.e. that real AGI must be self-aware, then it would be highly dangerous to
have a bunch of superintelligent AGIs running around treating people merely
as a means to an end. Note that it is OK to treat a person as a means to an
end as long as that action can also be justified as an end in itself. I also
contend that if an AGI is self-aware, it must be programmed to understand
that it is not just an individual but part of a collective, which is human
civilization, and that part of its goal system should be in service to this
collective. This is true not just for machines but for people as well.

On 12/14/05, Michael Vassar <> wrote:
> Yes Jef. I think we agree.
> I think that there are several problems here relating to being
> understood, though.
> The first: real humans are not agents. Rather, they act *slightly*
> agent-like at least a little of the time. Agency is immensely powerful,
> and civilization is the result. However, a consequence of this is that
> almost all people have no model of agency, and fail to understand it at
> all. (Are Role Playing Games effectively training in understanding
> agency?)
> The second problem is that among those few who understand agency we often
> have a "virtue and sanity" cargo cult. It is correct that certain beliefs,
> assertions, and preferences, such as the belief that the world is about to
> end, that one is almost uniquely capable of saving it, or that one no
> longer needs traditional morality, are extremely good empirically
> validated predictors of harmful behavior. It is likewise true that in the
> Pacific Islands in the 1940s airstrips were good empirically validated
> predictors of food delivery. The refusal to formally adopt agency in place
> of evolved morality, despite the knowledge that morality evolved for a
> radically different environment, because morality's rejection has
> predicted destructive behavior in the past, is thus similar to building
> airstrips after the food deliveries have stopped, especially since agency,
> properly understood, fully encompasses that part of evolved moral behavior
> that (due to its self-referential nature and connection to your goal
> system) you wish to retain. Similar things can be said about the refusal
> to adopt any particular far-from-apparent-consensus belief, especially
> beliefs regarding the low rationality, agency (ironically), or competence
> of accepted high-status authorities. Admittedly, the warmth of evolved
> morality, agreement with consensus, and high regard for the
> leaders/authorities of one's society are features of one's goal system, so
> I am counseling a sort of self-sacrifice; but more properly I am
> counseling that certain goals be temporarily deferred for the sake of the
> long term, for without doing so those goals will fail to be realized in
> the more distant future. Students of psychology should be at least as
> aware of the need to avoid hyperbolic discounting as students of history
> should be of insanity, especially given that the actual expected cost of
> insanity is so low (for most forms of insanity, only one relatively normal
> human life worth of utility plus a bit extra is at stake).
> Tennessee: The vast majority of FAI and UFAI designs are, at equilibrium,
> consequentialist rational agents, but NOT objectivist. They are
> consequentialist because without consequentialism goals can drift, but
> consequentialism is a black hole from which goals cannot, under ordinary
> circumstances, drift.
> They are NOT objectivist because, except in rare cases, they aren't even
> expected to have a "Self" to be self-interested in. Perfectly in
> violation of the categorical imperative, they will see everything as a
> means to an end, like all consequentialists do.

I swear upon the altar of God, eternal hostility to every form of tyranny
over the mind of man. - Thomas Jefferson

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:54 MDT