From: Eliezer S. Yudkowsky (email@example.com)
Date: Mon Sep 19 2005 - 18:09:15 MDT
Phil Goetz wrote:
> --- "Eliezer S. Yudkowsky" <firstname.lastname@example.org> wrote:
>>Sure, I was pleasantly surprised by Baum. Baum had at least one new
>>idea and said at least one sensible thing about it, a compliment I'd
>>pay also to Jeff Hawkins.
> What is the (at least) one new idea and the sensible thing about it,
> in the case of Hawkins?
Vernon Mountcastle looked at the cerebral cortex and realized that the
layered structure is strongly similar throughout, and said, "Maybe the
whole cortex is implementing roughly the same algorithm."
Jeff Hawkins says, "Maybe the universal cortical algorithm is temporal
hierarchical sequence prediction and conflict detection."
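Hawkins's proposal is hierarchical and much richer than this, but the bare
notion of "predict the next element of a temporal sequence, and flag a
conflict when the prediction fails" can be sketched in a few lines. (A toy
illustration of my own, not Hawkins's actual model; a real hierarchy would
pass conflicts up to slower, more abstract layers.)

```python
# Toy sketch: first-order temporal sequence prediction with conflict
# detection.  Illustrative only -- a deliberately flattened stand-in for
# the hierarchical idea, not an implementation of Hawkins's model.

from collections import defaultdict

class SequencePredictor:
    """Remembers, for each symbol, its most frequent successor."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def step(self, symbol):
        """Learn from the new symbol; return (next_prediction, conflict)."""
        conflict = False
        if self.prev is not None:
            succ = self.counts[self.prev]
            if succ:  # we had a prediction for this position; check it
                predicted = max(succ, key=succ.get)
                conflict = (predicted != symbol)
            succ[symbol] += 1
        self.prev = symbol
        nxt = self.counts[symbol]
        prediction = max(nxt, key=nxt.get) if nxt else None
        return prediction, conflict

predictor = SequencePredictor()
conflicts = [predictor.step(sym)[1] for sym in "abcabcabcabXabc"]
# Only the unexpected 'X', violating the learned "b -> c" pattern,
# registers a conflict.
```

After three clean repetitions of "abc", the predictor expects 'c' after
'b'; the 'X' is the lone surprise, and in a hierarchy that surprise is
exactly what would get handed upward.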
If I'd heard this theory back when I'd just studied functional
neuroanatomy but not evolutionary biology with math, I'd have dismissed
it out of hand as physics envy. Now that I have a quantitative grasp of
how incredibly slowly evolution runs, it is far more plausible to me that
most or even all of the cerebral cortex is running one underlying
algorithm with various degrees of local tweaks - even if it's not at all
obvious that this is the case just from studying functional neuroanatomy.
There's a lot more to the human brain than cerebral cortex, but if
Redwood Neuroscience can prove Hawkins's notion or find some other
common algorithm at work across the cerebral cortex, it'll be the
largest contribution to neuroscience since Marr.
>>I bet that if you name three subtleties, I can describe
>>how Bayes plus expected utility plus Solomonoff (= AIXI) would do it
>>given infinite computing power.
> I found a paper by Marcus Hutter on AIXI, but it's 67 pages.
> Do you recommend anything shorter?
...not really. "A Gentle Introduction to the Universal Algorithmic
Agent AIXI" is probably as gentle as it gets.
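For what it's worth, the agent itself compresses into a single expectimax
formula; in one common notation (actions a, observations o, rewards r,
universal machine U, program length ℓ):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl( r_k + \cdots + r_m \bigr)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Every program q consistent with the interaction history contributes weight
2^{-ℓ(q)} (the Solomonoff prior), and the agent picks the action maximizing
the prior-weighted expected reward sum; hence "Bayes plus expected utility
plus Solomonoff."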
>>I would guess that not many AI people can spot-read the difference
>>between p(A []-> B) and p(B|A)
> Not me. What do the brackets mean?
A square with an arrow leading out to the right indicates
(counterfactual) causation. p(A []-> B) reads "the probability that B
would have happened if we had done A." The currently dominant position in
decision theory holds that expected utility should be evaluated by
A []-> B rather than B|A, since we aim to cause good outcomes rather than
provide ourselves with good news. However that's a much thornier
discussion than Hempel's Paradox and my own opinions are nonstandard, so
I'd as soon not go there.
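Still, the distinction itself can be made concrete with a toy confounded
model (illustrative numbers only): a hidden cause C makes both the action A
and the bad outcome B more likely, while A itself does nothing to B.
Conditioning on A then makes B look likely, but intervening to do A leaves
B's probability untouched:

```python
# Toy model contrasting p(B|A) with p(A []-> B), i.e. P(B | do(A)).
# Numbers are illustrative assumptions, not from the discussion above.
# Structure: hidden cause C -> A and C -> B; A has no effect on B.

p_C = 0.5                              # P(C = true)
p_A_given = {True: 0.9, False: 0.1}    # P(A | C)
p_B_given = {True: 0.8, False: 0.2}    # P(B | C); note B ignores A

# Evidential: P(B | A), conditioning on the observation that A happened.
num = sum(pc * p_A_given[c] * p_B_given[c]
          for c, pc in [(True, p_C), (False, 1 - p_C)])
den = sum(pc * p_A_given[c]
          for c, pc in [(True, p_C), (False, 1 - p_C)])
p_B_cond_A = num / den

# Causal: P(B | do(A)).  Intervening severs the C -> A arrow, so C keeps
# its prior and A's value is irrelevant to B in this model.
p_B_do_A = p_C * p_B_given[True] + (1 - p_C) * p_B_given[False]

print(round(p_B_cond_A, 3))  # -> 0.74 : observing A is evidence for C
print(round(p_B_do_A, 3))    # -> 0.5  : doing A causes nothing
```

An agent maximizing by B|A would refuse the harmless action A merely
because A is bad news about C; maximizing by A []-> B correctly treats A
as causally inert here.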
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT