Re: Building a friendly AI from a "just do what I tell you" AI

From: Hector Zenil (hzenilc@gmail.com)
Date: Wed Nov 21 2007 - 18:30:49 MST


On Nov 21, 2007 2:33 AM, <sl4.20.pris@spamgourmet.com> wrote:
> On Nov 18, 2007 11:01 PM, Hector Zenil - hzenilc@gmail.com wrote:
> > There is another potential issue concerning questions about models of
> > AI, with two cases, both very likely to arise, as I will explain below:
> >
> > 1. The problem of the OAI deciding whether a model (created by itself
> > or not) is FAI is undecidable.
> > or
> > 2. It is computationally irreducible.
> >
> > In both cases the OAI cannot give a definite answer. Now replace OAI
> > with anything other than a hypercomputer.
>
> If you are right, this would also imply that we humans CAN'T prove
> that a specific AGI is friendly, correct?
>

That's right, as long as the human mind does not turn out to be a
hypercomputer operating beyond the Turing limit, or some super-powerful
organ beyond the reach of science.

These are all the logical scenarios I can think of:

(1) Any type of AGI should be capable of universal computation;
otherwise it would be potentially less powerful than an ordinary
digital computer for most tasks. Moreover, its behavior would be
decidable, so any digital computer (even us, performing at least at
the first Turing degree) would be able to know "a priori" any answer
the AGI could give, simply by shortcutting the computation of such a
weak AGI (modulo speed-ups).
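To make the "shortcutting" concrete, here is a minimal sketch in
Python; the two-state machine is a toy of my own, standing in for any
sub-universal "AGI":

    # A decidable "AGI": a plain finite automaton checking the parity
    # of 1s in its input. Any digital computer can know its verdicts
    # "a priori" by simply running the same transition table (faster,
    # if it likes).
    def shortcut(transitions, accepting, state, word):
        for symbol in word:
            state = transitions[(state, symbol)]  # total: always defined
        return state in accepting

    T = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd',  '0'): 'odd',  ('odd',  '1'): 'even'}
    print(shortcut(T, {'even'}, 'even', '1101'))  # False: three 1s, odd parity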

Therefore:

(a) Under the Church-Turing thesis (CTT), which is widely accepted as
true given the evidence in its favor and the total lack of evidence
against it, no Turing machine can decide in general any non-trivial
property (non-trivial in the sense of Rice's theorem), such as being
FAI, or any other consequence of a computation in general (just as
nobody can decide or predict whether a Windows machine will crash,
although that does not prevent Microsoft from trying to minimize
crashes as much as possible); a sketch of the underlying reduction
follows (b) below.
and
(b) As you point out, under computationalism (a kind of strong,
generalized CTT), the human mind cannot prove any non-trivial property
of an AGI (under the minimum requirement that the AGI be capable of
universal computation; but likewise, that does not prevent us from
trying to build PCs).
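Here is the sketch referred to in (a): a purely illustrative Python
rendering of the standard Rice-style reduction. The names is_friendly,
simulate and friendly_behaviour are hypothetical, my own invention; if
a decider is_friendly for the property "is FAI" existed, it would
solve the halting problem:

    def solves_halting(machine_source, input_word, is_friendly):
        # Build a program that first runs the arbitrary machine M on w
        # and only then behaves like some known-friendly program.
        wrapper = (
            "def run(x):\n"
            f"    simulate({machine_source!r}, {input_word!r})\n"  # loops forever iff M never halts on w
            "    return friendly_behaviour(x)\n"  # reached only if M halts
        )
        # If M halts on w, wrapper computes the friendly function; if M
        # diverges, wrapper computes the nowhere-defined function. Since
        # friendliness is a non-trivial property of the computed
        # function, the answer below would decide the halting problem --
        # which is impossible, so no such decider can exist.
        return is_friendly(wrapper)

Nothing in the sketch depends on what "friendly" actually means, only
on its being a non-trivial property of the function a program computes.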

Because (a) and (b) are widely accepted as true in science for many
reasons, even though esoteric approaches have been suggested (e.g.
Penrose's claims that the human mind is some quantum hypercomputer
beyond the reach of science), and considering that even the standard
model of quantum computing is Turing-reducible, so that even a quantum
computer would fail to decide any non-trivial property of an AGI, the
answer is yes: it is very unlikely that any AGI, OAI or XAI, including
the human mind, can decide whether or not an AGI is FAI, OAI or any
other XAI, as long as the "AI" part remains, computationally speaking,
powerful enough.

And even if (b) turns out to be false because of
hypercomputationalism, AGI would still be essentially undecidable if
it were taken to the highest feasible degree of computation, and that
would be bad news even for AGI, because it could indirectly imply that
intelligence is perhaps an emergent property of a hypercomputer
encoded in the mind, a case which of course I strongly disbelieve.

That does not prevent us from talking at length about interesting
approaches to AGI and FAI, especially how to reach them. But it is
kind of weird that most discussions do not take these important facts
into consideration: they are definite constraints on computational
systems in general, and they are neglected. Once aware of all that,
perhaps we could take the discussion to another level.

Hector

-- 
Hector Zenil-Chavez
Université de Lille I (Laboratoire d'Informatique Fondamentale)
Université Panthéon-Sorbonne -Paris 1- (IHPST)
--------------------------------
zenil.mathrix.org
animaexmachina.com
---------------------------------


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT