Re: Is complex emergence necessary for AGI?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Sep 20 2005 - 13:00:06 MDT


Michael Wilson wrote:
> The claims that any transhuman intelligence
> will renormalise to a rational basis, and that this is actually a better
> way to develop AGI regardless of Friendliness concerns, are weaker ones
> and again stand only as opinion in public at this time.

1) Say "Bayesian basis" not "rational basis". In many philosophical
systems, the word "rational" takes on meanings that you and I would
regard as specific moral content.

2) *Any* transhuman intelligence? That's a generalization over all
possible minds. Did you spend at least a week of solid thought trying
to design a counterexample?

I would guess that *most* other proposed architectures, if they worked
well enough to achieve serious optimization power, and turned that
optimization power upon themselves, and did not self-destruct, would
renormalize to expected utility maximizers or something closely akin.
Why? Because of all the coherence properties which are necessary and
often sufficient unto expected utility, same as with probability theory.
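
To illustrate one of those coherence properties: an agent whose
preferences are cyclic can be money-pumped. A minimal sketch in
Python (the agent, its preferences, and the fee are all hypothetical,
chosen only to show the exploit):

    # A money-pump against cyclic (incoherent) preferences: the agent
    # pays a small fee for every trade it "prefers", and trading around
    # the cycle drives it to arbitrary losses.

    FEE = 0.01  # what the agent will pay to swap to a preferred item

    # Cyclic preference: A over B, B over C, C over A.
    prefers = {("A", "B"), ("B", "C"), ("C", "A")}

    # For each item held, the item the exploiter offers next
    # (always one the agent prefers to what it currently holds).
    offer = {"B": "A", "A": "C", "C": "B"}

    def money_pump(start_item, rounds):
        """Trade around the cycle, charging FEE per swap; return loss."""
        held, loss = start_item, 0.0
        for _ in range(rounds):
            offered = offer[held]
            if (offered, held) in prefers:  # agent prefers the offer
                held, loss = offered, loss + FEE
        return loss

    print(money_pump("B", 300))  # ~3.0, and growing without bound

Preferences immune to this exploit are exactly those representable
by a utility function; add the analogous coherence over gambles and
you get expected utility maximization, which is the content of the
von Neumann-Morgenstern theorem.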

> 9. No-one associated with the SIAI denies that the brain is an example
> of a 'Complex system', or claims that emergence as a concept won't
> be useful for studying it.

That generalizes over everyone associated with SIAI, and you haven't
polled them all... It seems to me that "emergence" as a concept has
proven actively harmful. Whether there would be a residuum of
usefulness if all conceptually harmful aspects were eliminated...
probably. But I get along quite fine without ever attributing anything
to "emergence" or calling it an "emergent property", though from time to
time I must say "Y arises from X" or "Y emerges from X".

> 10. The issue of 'Friendliness content' is genuinely separate from
> 'Friendliness structure' and hence 'strong Friendliness verification'.

CEV blends content and structure in some ways.

> Arguments about whether CV, or 'joy, choice and growth', or domain
> protection, or hedonism or Sysops or anything similar are a good idea
> are debates about Friendliness content. This is important, but it's
> well separated from issues of structural verification and tractable
> implementation, and different in character (because it involves what
> we want instead of how to do it).

Problems with "domain protection" or "joyous growth" are structural, not
content-only. Domain protection attempts to use AI as a means to
implement world-changes for the sake of desired consequences, without
any attempt to have the FAI verify that the changes really do match up
with the desired consequences. "Joyous growth" tries to transfer over a
small chunk of moral complexity as direct programming, without setting
up a dynamic to transfer over all necessary humane complexity. These
are both distinctly structural issues.

> 12. Finally, my objection to claims about the value of Complexity theory
> was summed up by one critic's comment that "Wolfram's 'A New Kind of
> Science' would have been fine if it had been called 'Fun With Graph
> Paper'". The field has produced a vast amount of hype, a small amount
> of interesting maths and very few useful predictive theories in other
> domains. Its proponents are quick to claim that their ideas apply to
> virtually everything, when in practice they seem to have been actually
> useful in rather few cases. This opinion is based on coverage in the
> science press and would be easy to change via evidence, but to date
> no-one has responded to Eliezer's challenge with real examples of
> complexity theory doing something useful.

I asked for Complexity math applicable to *cognition* in humans or
elsewhere, which produces specific predictions better than
maximum-entropy distributions over the same phenomena.
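
Concretely, that bar is a scoring-rule comparison. A minimal sketch
of what passing it would look like, with entirely made-up outcomes
and numbers (absent further constraints, the maximum-entropy
distribution over a finite outcome set is uniform):

    import math

    outcomes = ["a", "b", "c", "d"]
    observed = ["a", "a", "b", "a", "c", "a", "b", "a"]  # made-up data

    maxent = {o: 1 / len(outcomes) for o in outcomes}  # uniform baseline
    theory = {"a": 0.55, "b": 0.25, "c": 0.15, "d": 0.05}  # candidate

    def log_loss(dist, data):
        """Average negative log-probability (in nats) assigned to data."""
        return -sum(math.log(dist[x]) for x in data) / len(data)

    print("max-ent baseline:", round(log_loss(maxent, observed), 3))
    print("candidate theory:", round(log_loss(theory, observed), 3))
    # The candidate earns its keep only if its score is strictly lower.

A Complexity-theoretic model of cognition would meet the challenge by
playing the role of 'theory' here and beating the baseline, which,
per the above, no one has yet done.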

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

