Re: Error detecting, error correcting, error predicting

From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Wed May 07 2008 - 03:25:38 MDT


> I don't buy it. You claim we can talk to a godlike AI through a chain
> of agents of gradually increasing intelligence. So instead of A -> C
> we have A -> B -> C and presumably error(A,B) + error(B,C) <
> error(A,C). Why would this be true? How would a dog help us talk to
> an insect?

The point is not to talk to the godlike being through a chain - we can
talk to the AI directly.

The point is to have some conception of whether we can trust its
truthfulness - or even what truthfulness means to such a being. Dogs
and insects are not the right analogy here - any advanced AI should
have, as we do, the ability to generalise, to understand concepts,
and to use language. It's more like someone in the rural backwaters
of some country asking his son, who emigrated to the city, "can we
trust that president there?", and having the message relayed until it
reaches someone who knows.

And that analogy points out the flaws in the system - your point about
error(A,B) + error(B,C) vis-à-vis error(A,C). I don't think the chain
will be lost to Chinese whispers, mainly because truthfulness can be
characterised more rigidly than "friendliness" and similar ideas
(though we'll have to be very careful about sins of omission, or
slanting of the evidence), and because each AI in the chain will
itself be truthful.
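
To put your error(A,B) + error(B,C) point in concrete terms, here is a
toy sketch - the bit-flipping model, the error rates, and the function
names are my own inventions, purely illustrative. It treats each hop
in the chain as a noisy channel and compares one unreliable direct
hop against several much more reliable ones:

    import random

    def relay_chain(message_bits, per_hop_error, hops, rng):
        # Toy model: each hop independently flips each bit with
        # probability per_hop_error, standing in for one agent
        # garbling the message before passing it on.
        bits = list(message_bits)
        for _ in range(hops):
            bits = [b ^ 1 if rng.random() < per_hop_error else b
                    for b in bits]
        return bits

    def fraction_garbled(a, b):
        return sum(x != y for x, y in zip(a, b)) / len(a)

    rng = random.Random(0)
    original = [rng.randint(0, 1) for _ in range(1000)]

    # Direct channel A -> C: one hop with a large error rate.
    direct = relay_chain(original, per_hop_error=0.10, hops=1, rng=rng)

    # Chain A -> B -> C: three hops, each far more reliable.
    chained = relay_chain(original, per_hop_error=0.02, hops=3, rng=rng)

    print("direct error: ", fraction_garbled(original, direct))   # ~0.10
    print("chained error:", fraction_garbled(original, chained))  # ~0.06

The chain only beats the direct channel when each hop's error is much
smaller than the direct error - which is exactly why each AI in the
chain being itself truthful is the load-bearing assumption.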

We can even check by enquiring of each level what "truthfulness"
means to them. If the definition starts to become disturbing or
incomprehensible, then we can abort.
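
As a sketch of what that check could look like - the ask() interface
and the similarity scorer are hypothetical placeholders, not a real
API, just a way of pinning down the idea:

    def audit_chain(levels, baseline_definition, similarity,
                    threshold=0.8):
        # Walk up the chain of AIs, from least to most intelligent,
        # asking each for its working definition of truthfulness.
        # Abort as soon as a definition drifts too far from the
        # human baseline, i.e. becomes disturbing or incomprehensible.
        for level in levels:
            answer = level.ask("What does truthfulness mean to you?")
            if similarity(baseline_definition, answer) < threshold:
                return False, level  # abort at this level
        return True, None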


