From: Michael Roy Ames (firstname.lastname@example.org)
Date: Tue Oct 19 2004 - 23:25:26 MDT
It is unclear precisely what problem you are trying to solve, such that a
difficulty is caused by: "Tarski used Löb's Theorem to argue that no
consistent language can contain its own truth predicate." Could you explain
further the problem you are addressing?
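For reference, the result the quoted sentence gestures at is usually stated as Tarski's undefinability theorem (which in fact predates Löb's Theorem); a standard formulation, assuming a theory T that is consistent and can represent its own syntax:

```latex
% Tarski's undefinability theorem: no formula True(x) in the language of T
% satisfies the T-schema for every sentence \varphi of that language.
\text{There is no formula } \mathrm{True}(x) \text{ such that for all sentences } \varphi:
\quad T \vdash \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi
```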
Humans do not go 'up in flames' because we accept a reasonable level of
doubt in our decision making, and move on to the next question. The amount
of proof humans require before changing their minds varies, whereas a
Gödel machine requires a complete proof from its existing axioms. Does
the difficulty arise because you are planning on building a Gödel machine?
For a designed intelligent system, we can choose an arbitrarily small but
non-zero level of doubt in its proofs and encounter no difficulties. If we
require a fully axiomatic proof of something, then we will likely run into
the difficulties you suggested in your post.
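The "arbitrarily small level of doubt" idea has a familiar concrete analogue in probabilistic primality testing, where each extra round of checking multiplies the error bound down by a constant factor, so any target doubt level can be met with finitely many rounds. A minimal Miller-Rabin sketch (illustrative only, not from the original post; the function name and error-bound parameter are my own):

```python
import random

def is_probably_prime(n, max_error=2**-64):
    """Declare n prime with error probability below max_error (Miller-Rabin)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Each round wrongly passes a composite with probability at most 1/4,
    # so pick enough rounds that 4**-rounds <= max_error.
    rounds = 1
    while 4 ** -rounds > max_error:
        rounds += 1
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)       # modular exponentiation: a**d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # a witnesses that n is composite
    return True
```

Here "proof" is replaced by evidence whose residual doubt we choose in advance, which is exactly the move a designed system can make where a Gödel machine cannot.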
The Santa Claus Paradox is really only a paradox when applied to toy
systems. A system that interfaces with base reality can easily escape the
paradox by checking outside itself. This is one way humans resolve such
paradoxes.
Axioms and proofs are useful abstract tools, but the results of proving
must be checked against reality to determine their real-world usefulness.
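For reference, the Santa Claus Paradox mentioned above is usually presented as Curry's paradox, and it shows how a toy system with unrestricted self-reference derives an arbitrary conclusion S ("Santa Claus exists") from pure logic:

```latex
% Let C be the self-referential sentence C \leftrightarrow (C \to S).
\begin{align*}
&\text{1. Assume } C. \text{ Then } C \to S, \text{ and by modus ponens, } S.\\
&\text{2. Discharging the assumption: } C \to S.\\
&\text{3. But } C \to S \text{ is exactly } C, \text{ so } C \text{ holds.}\\
&\text{4. By modus ponens again, } S.
\end{align*}
```

A system that can check claims like S against the world, rather than only against its own sentences, is not forced to accept this chain.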
An AI built to solve problems in the real world cannot be isolated from the
real world and be expected to make useful improvements to itself, even if
they are provably correct according to its existing set of axioms. Checking
new ideas in the real world is required of a Friendly AI, as it is required
in all scientific endeavors, and for the same reasons.
Is this *any* help? Or were you asking a different question?
Michael Roy Ames
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT