Re: [sl4] simplistic models of capability growth

From: Johnicholas Hines (johnicholas.hines@gmail.com)
Date: Mon Feb 09 2009 - 16:23:03 MST


On Mon, Feb 9, 2009 at 1:26 PM, Petter Wingren-Rasmussen
<petterwr@gmail.com> wrote:
> Defining things is a very good start :)
[...]
>
> If I understand this correctly, it means that any non-modifiable part
> of an AI will sooner or later become the limiting factor for its growth.
> In the long run, this creates a very large risk of a competing system
> without a non-modifiable part becoming more efficient/powerful than
> the one with non-modifiable parts.
> For me the conclusion is clear:
> Any AI with a hard-coded law for "being nice", "prevent murders" or any
> other non-modifiable procedure will, with great probability, be overrun
> by a system without those hard-coded rules.
>

Your summary above is indeed the rhetorical story I was attempting to
capture with that differential equation. IF that differential equation
approximates the growth of an AI with a hard-coded law such as you
describe, THEN the conclusion you draw follows. Thanks for reading so
closely!
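For concreteness, here is a toy numerical version of that story. Do not
take this as the equation from my earlier post; the functional forms and
constants below (growth_rate, ceiling, the logistic-style cap) are
placeholders invented purely for illustration:

    # Toy comparison (illustrative only, Python): capability c grows by
    # self-improvement.  In the "constrained" system a hard-coded,
    # non-modifiable part acts as a ceiling on how far improvement can
    # go; the "unconstrained" system has no ceiling.  Forward Euler.

    def simulate(growth_rate, ceiling, steps=1000, dt=0.01):
        c = 1.0  # initial capability (arbitrary units)
        for _ in range(steps):
            if ceiling is None:
                dc = growth_rate * c                        # dc/dt = k*c
            else:
                dc = growth_rate * c * (1.0 - c / ceiling)  # dc/dt = k*c*(1 - c/C)
            c += dc * dt
        return c

    unconstrained = simulate(growth_rate=0.5, ceiling=None)
    constrained = simulate(growth_rate=0.5, ceiling=50.0)  # cap from the fixed part
    print("unconstrained:", round(unconstrained, 1))
    print("constrained:  ", round(constrained, 1))

Under these made-up assumptions the unconstrained system ends up far
ahead, which is exactly the "overrun" conclusion in your summary.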

However, I'm not convinced that _any_ of these models capture the
reality of AI well at all. My limited real-world experience with
general learning machines (e.g. genetic algorithms, neural nets,
theorem provers) is that they tend to gain interesting capabilities
rapidly (well inside 24 hours on a single desktop machine) and then
experience a "loss of steam". I'm not sure how best to model that
pattern using differential equations.
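If I had to guess at a shape for that "rapid gains, then loss of steam"
pattern, something with sharply diminishing returns would look roughly
like the sketch below. The asymptote A and time constant tau are
invented numbers, not measurements, and I would not claim this
particular equation actually describes the systems I mentioned:

    import math

    # Saturating-growth sketch: dc/dt = (A - c)/tau, which integrates
    # to c(t) = A*(1 - exp(-t/tau)).  Parameters are invented.
    A, tau = 100.0, 2.0   # asymptotic capability, time constant (hours)
    for t in range(0, 25, 4):
        c = A * (1.0 - math.exp(-t / tau))
        print("hour %2d: capability %5.1f" % (t, c))

Most of the gain arrives in the first few hours, after which progress
crawls -- which at least reproduces the qualitative pattern, even if it
says nothing about why the slowdown happens.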

Johnicholas
