Re: AI Goals [WAS Re: The Singularity vs. the Wall]

From: Jef Allbright
Date: Tue Apr 25 2006 - 16:54:23 MDT

On 4/25/06, Phillip Huggan <> wrote:
> If human minds templated on a brain can select MWI universe threads, then we
> do ontologically have a limited degree of Free Will.

I am not aware that any mind can or could "select MWI universe threads."

> "Describing the degree
> of uncertainty" is another phrase for *potential* Free Will, assuming the
> uncertainty stems from genuine physical limits and not just our incomplete
> observational epistemology.

The very useful illusion of free will results from our uncertainty
about our own choices, but I was referring to something else: our
capability, in the context of science or engineering, to objectively
quantify the uncertainty of a hypothetical measurement.

I don't know why you are trying to differentiate between physical
measurement uncertainty and "incomplete observational epistemology,"
which seems to mean "incomplete understanding of what is being
observed." It seems to me that a single uncertainty value would
suffice, encompassing all factors.
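To make the "single value" point concrete: in standard metrology practice, independent uncertainty components, whether instrumental, environmental, or epistemic, are combined in quadrature into one combined standard uncertainty. A minimal sketch (the component values are made up for illustration):

```python
import math

def combined_uncertainty(components):
    """Combine independent standard uncertainties by
    root-sum-of-squares, the conventional metrology rule."""
    return math.sqrt(sum(u * u for u in components))

# Hypothetical components in the same units: an instrumental
# limit (0.03) and an epistemic/model component (0.04).
# Downstream, only the combined value matters; the sources
# are indistinguishable once folded together.
u_total = combined_uncertainty([0.03, 0.04])  # -> 0.05
```

However the uncertainty arises, it enters subsequent calculations as one number.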

> Why can't some of the goals be non-evolvable? No one should ever engineer a
> particle accelerator large enough (good fraction of a galaxy I think) to
> destabilize the Space-Time vacuum. That goal doesn't need to evolve.

For all practical purposes, a particular goal in itself needn't
change during the lifetime of an agent. Keep in mind, however, that
for interesting goals the context is likely to change as the agent
and its environment develop. A goal entails controlling some
measurement relative to something else, and while the goal can
remain constant, the world is unlikely to. Note also that goals
don't exist in isolation; any interesting goal will be entangled
with multiple other considerations.

As I mentioned earlier, there is a fundamental problem with an agent
at time t setting goals to be accomplished by actions taken over a
significant future duration. Unless the context of the task is very
well defined in advance, there is a significant risk of the goal
becoming inaccurate or even irrelevant. Better to act on
values-based principles that send you in the desired direction
than to target an end point that may not be there, or may not be what
you thought it was, when you arrive.
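The contrast can be shown with a toy simulation (not from the original post, and the parameters are arbitrary): one agent commits at time t to the target's initial position, another re-evaluates its direction each step as the target drifts.

```python
def simulate(steps=150, drift=0.5, speed=1.0):
    """Toy 1-D world: the target drifts every step. Agent 'a'
    heads for a snapshot of the target taken at time t; agent
    'b' re-evaluates its direction each step."""
    target = 50.0          # where the goal is at time t
    fixed_goal = target    # agent a's frozen endpoint
    a = b = 0.0            # both agents start at the origin
    for _ in range(steps):
        target += drift                              # world keeps changing
        a += speed if fixed_goal > a else -speed     # endpoint-driven
        b += speed if target > b else -speed         # direction re-evaluated
    return abs(target - a), abs(target - b)

err_fixed, err_adaptive = simulate()
# err_fixed is large (agent a arrived where the goal used to be);
# err_adaptive stays small (agent b tracked the moving goal).
```

The frozen-endpoint agent "arrives" successfully, but at a place the goal left long ago; the agent steering by a continually re-evaluated direction ends up close to where the goal actually is.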

> I see
> AGI as progressively laying fixed boundary conditions that only evolve as
> our understanding of WMD engineering increases. No need to ever allow
> people to make the mega Particle Accelerator. We might want to keep black
> hole experiments/science out of bounds for now, but later on open it up when
> we are more technically robust.

Technology is an inherently double-edged sword, and knowledge tends to
ratchet only forward. I see no evidence that prohibitions work over
the long term, but I do see that increasing awareness leads to
increasingly effective decision-making, so I recommend focusing on
methods of increasing awareness over increasing scope. Note also that
increasing awareness tends to re-frame problems so that they become
non-problems, or become amenable to unforeseen solutions. Trying
to halt progress is not the answer.

> Stagnation isn't that bad of an endgame if
> it is a happy plateau.

Happiness in humans is not a static process or condition. It is the
system's way of motivating progress toward goals. We would do well
to choose our path forward wisely, and we may indeed find that
"there's plenty of room at the bottom," but I think accepting
stagnation would be unacceptable to humanity.

> Jef Allbright <> wrote:
> <SNIP>
> Similarly with "free-will". Certainly we can all speak of free-will
> within the context of common human social interactions and it makes
> sense. In fact our legal and judicial system, as well as
> moral/ethical beliefs and behavior depend on it. However, just as
> with the self, the closer one looks, the more it is apparent that
> there is no ultimate free-will, and that all interactions can be
> described precisely (including describing the degree of uncertainty)
> within a deterministic framework of explanation. In fact, if our
> behavior were not deterministic, we would lose the "free-will"--the
> ability to choose--that we do have.
> <SNIP>
> Goals are always about controlling some (complex) parameter relative
> to something else. Given a well-specified context, then we can
> precisely define goals. Goals are necessary for an AGI, but I believe
> they must evolve. Within an evolving model of an evolving environment,
> to be invariant is to die.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT