Re: [sl4] An attempt at empathic AI

From: Johnicholas Hines (johnicholas.hines@gmail.com)
Date: Sun Feb 22 2009 - 14:20:26 MST


On Sun, Feb 22, 2009 at 3:42 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
> --- On Sun, 2/22/09, Johnicholas Hines <johnicholas.hines@gmail.com> wrote:
>> Holographic AGI means you can't examine the structure of the AGI and
>> predict how it will behave. This is risky.
>
> Unfortunately it is a necessary property of any system that has greater algorithmic complexity than you do (beyond a small language-dependent constant, for those who want to nitpick about the math). You can't simulate (and therefore can't predict) what a system will do without knowing everything it knows.

I think you're thinking of the undecidability of essentially all
nontrivial predicates about general computer programs (Rice's
theorem). But undecidability only rules out a single algorithm that
works for every program. It's entirely possible to solve the halting
problem for particular computer programs, or for restricted classes
of them.
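
To make that concrete, here's a minimal sketch (Python; my own toy
illustration, names hypothetical) of a genuine halting decider for
one restricted class: deterministic programs whose reachable state
set is finite. Such a program must either halt or revisit a state,
in which case it loops forever.

def halts(step, state):
    # Decide halting for a deterministic transition function `step`
    # whose reachable state set is finite. `step` returns the next
    # state, or None to signal a normal halt.
    seen = set()
    while state is not None:
        if state in seen:   # revisited a state: it loops forever
            return False
        seen.add(state)
        state = step(state)
    return True             # reached the halt state

# A countdown loop provably halts:
assert halts(lambda n: n - 1 if n > 0 else None, 10)
# A two-state oscillator provably does not:
assert not halts(lambda n: 1 - n, 0)

The halting theorem only says no one decider covers every program;
deciders for particular programs or restricted classes are fine.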

We can make strong arguments (maybe not proofs; probabilistic and/or
informal arguments) about how a modular system will behave by
inspecting its structure. Not every system has the structure needed
to support such arguments, of course.
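
As a toy illustration of a structure-based guarantee (a sketch in
the spirit of interval analysis; the modules and names are
hypothetical, not any particular system): give each module a
declared contract mapping an interval of possible inputs to an
interval of possible outputs, and bound the whole pipeline by
composing the contracts, without ever running the system.

def scale(k):
    # Contract: multiplies input by a constant k >= 0.
    return lambda lo, hi: (k * lo, k * hi)

def shift(c):
    # Contract: adds a constant c.
    return lambda lo, hi: (lo + c, hi + c)

def clip(a, b):
    # Contract: clamps input into [a, b].
    return lambda lo, hi: (min(max(lo, a), b), min(max(hi, a), b))

def pipeline_bound(modules, lo, hi):
    # Compose contracts to bound the pipeline's output range.
    for m in modules:
        lo, hi = m(lo, hi)
    return lo, hi

# For any input in [-1000, 1000], the output is provably in [0, 1]:
# an argument made from the system's structure alone.
print(pipeline_bound([scale(0.5), shift(1.0), clip(0.0, 1.0)], -1000, 1000))

A holographic system, by contrast, offers no module boundaries at
which to attach contracts like these.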

We should strive to make the AI, or the AI seed, at least somewhat
analyzable, rather than holographic.

Johnicholas


