From: Michael Anissimov (michaelanissimov@gmail.com)
Date: Mon Aug 14 2006 - 00:13:41 MDT
Tennessee,
On 8/13/06, Tennessee Leeuwenburg <tennessee@tennessee.id.au> wrote:
> I beg to request more clarification. Eliezer promotes (for example)
> Bayes as a possibly perfect way of reasoning and inferences. If this is
> so, does this not imply that all questions have a correct,
> non-subjective response? If the correctness of Bayesian reasoning is
> non-subjective, does this not perhaps mean that any perfectly reasoning
> AGI can in fact reach one conclusion?
Bayesian reasoning tells you how to update your confidence levels in
various beliefs, given evidence.  Given enough evidence, it lets you
converge on true answers to questions of fact.  But it does not give
answers to questions that are inherently observer-biased or based on
values.  For example, when someone says "chocolate ice cream is good",
their taste buds relay perceptions of the ice cream molecules to the
brain, which returns a pleasurable feeling.  When they utter the
phrase "chocolate ice cream is good", they are misinterpreting a
subjective fact about their own perception as an objective quality of
the ice cream.  This is called the Mind Projection Fallacy.
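
To make the first point concrete, here is a minimal sketch of a single
Bayesian update in Python; the prior and likelihood numbers are
invented for the example and stand in for whatever evidence you
actually have:

    # Toy Bayesian update: revise confidence in a hypothesis H after evidence E.
    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    prior = 0.01            # P(H): prior confidence in the hypothesis (assumed)
    p_e_given_h = 0.95      # P(E|H): likelihood of the evidence if H is true (assumed)
    p_e_given_not_h = 0.05  # P(E|~H): likelihood of the evidence if H is false (assumed)

    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # total probability of E
    posterior = p_e_given_h * prior / p_e                      # updated confidence P(H|E)
    print(round(posterior, 3))  # ~0.161: the evidence shifts confidence, nothing more

Note that the update only ever tells you how confident to be in a
factual claim; it says nothing about which outcomes you should want.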
If you speculate and say, "AGIs will do so-and-so", you are claiming
that you are capable of visualizing the behavioral outputs of a truly
massive space of minds. We can't do this, any more than an amateur
can predict the next move of a chess grandmaster. (In fact, it's far
more difficult.) We can only say, "an AGI's actions are very likely
to fall somewhere in category X, if it provably maintains an abstract
invariant that constrains its actions to category X".  This is how we can
make statements about the long-term behavior of mathematical
abstractions like AIXI.
> Is it reasonable to postulate a truly superintelligent being which has
> such an actually useless goal? I say not.
Useless to you, important to it. Humans like having sex and achieving
status because our evolutionary circumstances produced a selection
pressure in favor of those goals. There is no such thing as an
inherently meaningful goal, only goals that minds with a particular
structure happen to like. To state otherwise is to fall prey to the
Mind Projection Fallacy.
> Indeed. Is this a good thing? This list by its nature is dedicated to
> the moral exploration of superintelligence. Had we no opinion on what
> would or would not be "good" in a superintelligent framework, we could
> say nothing of merit and would have to simply accept whatever comes.
The inherent flexibility of arbitrarily programmed mindspace is not
good or bad - it is a fact. Further explorations of which minds
humans would experience as good and which minds we would experience as
bad must first acknowledge this.
To do constructive work in an intellectual field, you must first
consult the primary literature. For Friendly AI, this begins with
reading CFAI in its entirety.
http://en.wikipedia.org/wiki/Friendly_AI has further links, which
include works by Hibbard and Voss. Ben Goertzel has also written a
few papers. For a short page that begins to pick at the problem, see
http://www.intelligence.org/intro/friendly.html.
> Instead, however we are considering the limitations and implications of
> AGI both in terms of self-preservation and more widely in terms of other
> moral qualities. What might an infinitely plastic mind, having achieved
> all goals related to the accumulation of knowledge, adopt as a goal?
You're anthropomorphizing.  Humans need to keep making up and pursuing
goals to justify their existence, but this does not apply to all
minds.  An AI programmed to acquire knowledge would either be content
to keep going forever, because there are always more bits of
information to learn, or it would see all further actions as having
zero differential utility and stop completely, or settle into some
equilibrium output.
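
As a toy illustration of those last possibilities, consider an agent
that always picks the action with the highest expected utility and
halts once nothing beats doing nothing; this is only a sketch under
made-up utilities, not a claim about any real AGI design:

    # Toy expected-utility agent: acts while some action beats the do-nothing baseline.
    def run_agent(actions, utility, baseline=0.0, max_steps=5):
        history = []
        for _ in range(max_steps):
            best = max(actions, key=utility)
            if utility(best) <= baseline:   # zero differential utility:
                break                       # nothing is worth doing, so the agent stops
            history.append(best)
        return history

    # An agent whose options never stop paying off keeps going (here, until max_steps);
    # one for whom every action is worth the same as idling halts immediately.
    print(run_agent(["learn", "idle"], lambda a: 1.0 if a == "learn" else 0.0))
    print(run_agent(["learn", "idle"], lambda a: 0.0))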
> I might say though, if an AGI is perfectly alien, then it is also
> perfectly incomprehensible. If it is perfectly incomprehensible, then
> everything we discuss here is complete rubbish.
AGI in general is too large a space to make many statements about,
except in very abstract terms.  We can discuss how an AGI programmed
in a particular way would act, especially as it begins to reach around
and modify its own source code. These are questions being asked in a
rigorous, technical manner by small groups of mathematicians and
theoretical computer scientists worldwide. Coming up with real
answers is a huge challenge.
--
Michael Anissimov
Lifeboat Foundation
http://lifeboat.com
http://acceleratingfuture.com/michael/blog