Re: [sl4] An attempt at empathic AI

From: Johnicholas Hines (johnicholas.hines@gmail.com)
Date: Mon Feb 23 2009 - 07:47:21 MST


On Mon, Feb 23, 2009 at 8:56 AM, Matt Mahoney <matmahoney@yahoo.com> wrote:
> It is possible for an algorithm with high complexity (i.e. there is no simple description of it) to do something simple, meaning there is another small program that does the same thing. My claim is that an intelligent algorithm can't be reduced in such a way. The simplest equivalent program is still complex.

I completely agree with you regarding the statements "An intelligent
algorithm can't be equivalent to a simple program." and "The simplest
equivalent program is still complex."

However, you seem to be thinking about characterizing or predicting
ALL of the system's behavior. It is possible to prove (or argue
informally for) properties that do not pin the behavior down exactly.
For example, in my pseudocode, I argued that the program would not
print "Hello World" but instead one of "True", "False", or nothing.
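To make the idea concrete, here is a hypothetical sketch (the original pseudocode lives in an earlier message, so the predicate and structure below are my own stand-ins): the exact output depends on a computation we may be unable to predict, yet the program's structure still bounds the set of possible outputs.

```python
def complex_predicate(n):
    # Stand-in for an arbitrarily complicated computation whose
    # result we do not try to predict in advance.
    return sum(int(d) for d in str(n ** n)) % 2 == 0

def run(n):
    # Structural argument: this function either prints nothing
    # (the early return) or prints the repr of a bool.
    if n < 0:
        return                       # prints nothing
    print(complex_predicate(n))      # prints "True" or "False"

# Without evaluating complex_predicate we cannot say which branch
# of the bool it produces, but we can argue from the structure of
# run() that it never prints "Hello World".
```

The point is that the informal argument inspects only the shape of `run`, not the value of `complex_predicate`.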

Here is another example:

A compiler would not be valuable if we could easily predict everything
about its output. Conversely, it also would not be valuable if we
could not argue (informally), from its structure, that its output has
the property of "correctness".

Johnicholas

P.S. I emphasize informal argument rather than proof because I think
it's an older and more important concept. Formal methods might seem to
have a qualitative advantage of "perfection", but because of possible
bugs in the verifier, cosmic rays, and so on, formal methods can never
guarantee the absence of errors, only very low probabilities of them.
There are also risks in the (inevitable) steps from the real world to
the assumptions, and from the conclusion back to the real world.
Emphasizing the perfection of the intermediate steps can lead to false
confidence in the entire thought process.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT