From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Mon Sep 20 2004 - 08:57:15 MDT
Jef Allbright wrote:
> A rational approach, as this term is commonly construed, requires
> effectively complete data, and sufficient time to process it... at
> any lower level of context, an effective intentional approach to the
> challenges of life involves a mix of rational analysis and heuristic
> "going with the flow".
That's what I used to think; this is why I used to be dismissive of
abstract reasoning theory. However, your analysis only applies to
methods of reasoning known to be normative for complete information
and unbounded computing power; even AIXI can do better than that
(it's normative for infinite computing power but limited information,
though only for restricted classes of utility function). I am now
talking about approaching /the/ normative reasoning strategy for
arbitrarily (but not trivially) bounded time (T), program length (L)
and information. I think it's highly likely that transhumans would
discover and prove the full theory, and once that happens I suspect it
will be what people mean when they refer to 'general intelligence'. In
the meantime, we're starting to see through the cogsci haze to some
promising-looking close approximations.
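
To make the T/L bound concrete, here is a purely illustrative toy sketch
(my own stand-in, not Hutter's actual construction and certainly not the
full theory): enumerate candidate policies up to a length bound L, give
each at most T seconds of scoring time against the observed history, and
keep the best one found within the budget.

import itertools
import time

def enumerate_policies(max_length):
    # Toy 'programs': fixed action strings over a binary alphabet,
    # enumerated up to the length bound L.
    for length in range(1, max_length + 1):
        for actions in itertools.product("01", repeat=length):
            yield "".join(actions)

def score(policy, history, time_budget):
    # Reward = how many history symbols the cyclically repeated policy
    # matches, abandoning evaluation once the per-policy time budget T
    # is exhausted.
    start = time.monotonic()
    total = 0
    for i, observed in enumerate(history):
        if time.monotonic() - start > time_budget:
            return total  # out of time: return the partial score
        total += (policy[i % len(policy)] == observed)
    return total

def best_bounded_policy(history, L, T):
    # Pick the highest-scoring policy within the length bound L and the
    # per-policy time bound T.
    return max(enumerate_policies(L), key=lambda p: score(p, history, T))

if __name__ == "__main__":
    print(best_bounded_policy("010101010101", L=3, T=0.01))  # -> "01"

Obviously a real bounded reasoner searches over programs rather than
fixed action strings, but the shape of the trade-off between T, L and
the quality of the selected strategy is the same.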
> The economics of the situation do not allow a flat normative
> solution over the varied and changing landscape in which we
> find ourselves now.
'Flat' wouldn't be the word I'd use, but there is a normative solution.
> The best we can do is apply a form of "bounded rationality" where we
> apply current knowledge and strategies, however incomplete but growing,
> to an increasingly diverse environment.
Everything that's worthwhile in AI will be implicit in the full
normative theory. Whatever strategy is best suited to a particular
problem, a normative reasoner will converge on it as soon as the
available feedback information can distinguish it from the other
possible strategies. All of the complicated cognitive machinery we've
been talking about putting into an AGI is still necessary, both to
reduce the bootstrap time and, more importantly, so that the programmers
have a clue what's going on (particularly, so we know how to define
goals).
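
As a toy illustration of the convergence claim (again my own sketch, not
the full theory): hold a few candidate prediction strategies in a
Bayesian mixture, and once the observed feedback distinguishes them,
nearly all of the posterior weight lands on the strategy that actually
fits the data, with no hand-tuned switching heuristic needed.

import random

random.seed(0)

# Each strategy is just the probability it assigns to seeing a 1 on any step.
strategies = {
    "always_0":  0.05,
    "fair_coin": 0.50,
    "mostly_1":  0.90,
}

# Start with a uniform prior over the candidate strategies.
posterior = {name: 1.0 / len(strategies) for name in strategies}

# The environment actually emits 1s 90% of the time, matching "mostly_1".
for step in range(50):
    observation = random.random() < 0.9
    for name, p1 in strategies.items():
        likelihood = p1 if observation else (1.0 - p1)
        posterior[name] *= likelihood
    total = sum(posterior.values())
    posterior = {name: weight / total for name, weight in posterior.items()}

print(posterior)  # "mostly_1" ends up with almost all of the weight

The point is only that the feedback itself does the strategy selection;
the reasoner does not need to be told in advance which heuristic suits
which environment.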
> Rather than try to extrapolate from what we know now and then normalize,
> we must study the ever-changing rules of the game and then optimize.
Friendly AI appears to require that we do both; unfortunately UFAI does
not, though doing so greatly increases the chances of building any sort
of AGI.
> That said, the far future environment and challenges will have little
> relationship with current human values and concerns.
For the superficial stuff that constitutes public issues, and the
intermediate complexity of extant moral systems and philosophical debate,
I agree. However, I am not so sure about breaking our reference class for
'volitional sentient' and fundamentals. While I am prepared to trust a
competent implementation of CV to select between universes I can't
comprehend, I'm concerned that the 'volition' part might evaporate halfway
through the extrapolation, leaving a result that I no longer care about.
* Michael Wilson
http://www.sl4.org/bin/wiki.pl?Starglider