Crack-pots [was Perspex Space]

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Wed Feb 09 2005 - 22:59:52 MST


 --- Michael Wilson <mwdestinystar@yahoo.co.uk> wrote:

>
> You've been playing word games for a long time. Over time
> the words acquired particular sympathies and antipathies
> in your mind that cause them to fit together in certain
> structures. But those structures were not built by careful
> combination of objective axioms and empirical evidence and
> show no (apparent) concern for plausibility as a
> predictive, descriptive model of a real world intelligent
> agent. Your attempts to formalise them amount to
> rationalisation of the wild guess that managed to gather
> enough cognitive support for itself over repeated mental
> shufflings.

Gee, thanks for this amazingly detailed insight into the
workings of my brain.

>
> This isn't a problem you can solve by connecting a few
> formal models with an overarching grid of fuzzy,
> ungrounded concepts that forms a cool-sounding pattern.
> The problem must be solved by large scale combination of
> those formal models into a vast, intricate, multilayered
> yet consistent pattern. There is no a priori requirement
> that the pattern look plausible or comprehensible to
> untrained perception, and indeed given the history we
> should expect this result. The solution /does/ have a kind
> of beauty to it, but what legions of AI researchers missed
> is that you can't produce a rose without massive amounts
> of intertwined biochemical complexity (on top of the basic
> principles of molecular physics and natural selection).

O.K., but no one has 'the solution' yet.

 
> Probabilistic reasoning is a formal model created in a
> space very different from reality. Application of it
> requires a lot of human cognitive effort to dynamically
> create maps that are both descriptive of the target
> problem and compliant with the logical requirements of
> probability theory, followed by further effort to
> manipulate the model in useful ways. Clearly any
> constructive and complete theory of AI based on Bayesian
> principles must account for how this activity is
> accomplished.

What makes you so sure that even Bayesian reasoning is up
to the job of handling FAI? For sure, Bayesian reasoning
is a powerful framework, but there are some curious gaps
and problems, aren't there? For instance, the problem of
the reference class (see Bostrom), or the inability to
determine what the a priori probabilities should be.
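
To make the priors point concrete, here's a toy sketch of
my own (plain Python, a made-up coin-flipping example, not
anyone's actual AI code): the same data gives different
posterior estimates depending on which supposedly
'uninformative' prior you start from.

    # Illustrative only: the posterior depends on the prior you choose.
    # Estimate a coin's bias after seeing 7 heads in 10 flips, using
    # two different Beta priors, both commonly called "uninformative".

    def posterior_mean(heads, flips, alpha, beta):
        """Mean of the Beta(alpha + heads, beta + tails) posterior."""
        tails = flips - heads
        return (alpha + heads) / (alpha + beta + heads + tails)

    heads, flips = 7, 10
    print(posterior_mean(heads, flips, 1.0, 1.0))  # uniform prior  -> ~0.667
    print(posterior_mean(heads, flips, 0.5, 0.5))  # Jeffreys prior -> ~0.682

Nothing deep there, just the standard observation that
Bayes tells you how to update but not where to start.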

Had it ever occurred to you that there is another, as yet
undiscovered, *even more powerful epistemology*, as far
beyond Bayes as Bayes is beyond Aristotle? This putative
super-powerful epistemology would subsume Bayes as a
special case whilst extending beyond the Bayesian
framework. And what if *this*, not Bayes, is the REAL
ultimate epistemology?

I bet you and Eli never thought of that.

*Marc glances at Wilson and Yudkowsky*

...only human *sigh*

> Epistemologists tend to forget that reality does not
> contain 'knowledge' any more than it contains 'flowers';
> in fact in practice 'knowledge' is a much more difficult
> class of regularity to define.

Fascinating. I take it you're right.

>
> I will keep it in mind in case I ever need to license
> your Gibberish Matrix Technology (TM) for use in
> producing meaningless yet superficially impressive press
> statements. Meanwhile I have to wonder if your real goal
> is to cement yourself as /the/ definitive, canonical
> crackpot that will be remembered as such for the rest of
> human history.

What on Earth are you talking about? At the moment
SL4 is just a tiny backwater messageboard somewhere on
the net that no one really cares about. And Sing Inst
is only a small non-profit that few take seriously.

You've become a conspiracy theorist now, have you? I'm
the biggest crack-pot in history and I've shown up at SL4
specifically to cement my reputation. Yeah, right.

Those in glass houses shouldn't throw stones. As far as I
know, the Singularity Institute is doing some good work,
but nothing there has yet been recognized in academia as a
'major advance' or anything. A bit rich for you to run
around dismissing everyone else as 'dabblers' (Eli's
favourite derogatory term) or 'crack-pots' (your favourite
term).

>
> Seriously, you've used just about the most indirectly
> grounded and hence fuzzily defined concepts in the human
> cognitive repertoire, both in the table and the
> supporting text. It's a standard human tendency to assume
> that the most abstract concepts have the most descriptive
> power, but as several posters have pointed out
> /engineering, and AI in particular, does not work like
> that/. Your output to date hardly constrains the design
> space of implementations at all; I suspect it would take
> very little additional rationalisation to characterise
> any sufficiently complex, superficially plausible AGI
> architecture as obeying these principles (should one so
> desire). Nor does it make any detailed predictions on the
> behavior of AGI or proto-AGI systems, and as such it is
> worthless as a constructive or predictive theory.

Um... let me point out that Sing Inst started with the
most fuzzily defined concept of the friggin' lot...
Friendliness. What the heck is that anyway? It's not even
comprehensible to most people.

I, on the other hand, start with 16 words fully describing
a friendly intelligence, which, while fuzzy, are at least
comprehensible:

http://www.sl4.org/wiki/TheWay

As to predictions, for fun I listed 10 falsifiable guesses
in an earlier thread:

http://www.sl4.org/archive/0501/10623.html

By all means let's invite the posthumans around to examine
them and give their verdict. They can then contrast my
ideas with yours and Eli's. I look forward to a bit of
light entertainment.

>
> I suspect that your preconceptions will prevent you from
> extracting anything of value; you need a delicate
> combination of open mindedness, rigorous filtering for
> clue and creativity within logical constraints to elicit
> a constructive, predictive understanding of AGI from the
> AI literature.

Interesting. Well, I imagine you know what you're talking
about here, and a lot of hard work and careful thinking is
certainly required, but I would have thought that the most
important thing is to get the correct 'top down strategy'
first.

Put it this way: suppose someone with no detailed
background knowledge of AGI theory hit on the correct top
down strategy leading to FAI. Then I suppose all that
would be required would be, say, 6 months of intense
full-time study of the books and journals, and they would
have absorbed enough information to actually implement the
strategy and usher in the Singularity.

But the converse is not true. Someone with a detailed
knowledge of AGI theory but no good top down strategy
could sit there sifting through the books and journals
for another 30 years and it would do them no good.

 
> Entertainment value aside, I suppose we can both hope
> that it will serve as an object lesson in the value of
> philosophical intuition in the future, albeit from
> perfectly opposed motives.
>
> * Michael Wilson
>
> http://www.sl4.org/wiki/Starglider

Sure ;)



