Re: Perspex Space

From: Marc Geddes
Date: Sun Feb 06 2005 - 20:25:17 MST

 --- Michael Wilson wrote:

> I agree. Legitimate AI researchers most commonly
> fail because they
> pick a plausible looking simplistic mechanism and
> try to extend it
> to cover all of intelligence. Cranks pick a
> completely arbitrary
> simplistic mechanism and simply declare that it
> explains intelligence,
> or for bonus crackpot points life, the universe and
> everything.
> This is an example of the latter scenario.
> Representing programs as a 4D
> hypercube in 20 dimensional space, with a single
> interpretation rule to
> provide Turing completeness, does not magically make
> everything better
> nor unify symbolic and connectionist AI in any
> meaningful manner. It's
> somewhat reminiscent of Kanerva's 'sparse
> distributed memory' theory,
> but with the plausibility, realism, and coherent
> description replaced by
> amateurish ranting and wild philosophical
> speculation.
> * Michael Wilson

Hmm. As you know I'm one of those who enjoy 'wild
philosophical speculation'. You and Eliezer seem to
be very dismissive of philosophy in general.

But if you're relying strictly on empirical data to
get you to the Singularity, you're out of luck, since
even at the exponentially expanding rate of knowledge
acquisition in cognitive sciences, the general
consensus seems to be that we won't know it all for
another 30-50 years. Don't want to wait that long, do
you?

Don't underestimate pure self-reflection. Although
it's true that the past track record of the
philosophical approach throughout history is pretty
abysmal, I think ancient philosophers *could in
principle* have come up with virtually all of modern
transhumanist philosophy (at least in terms of the
general principles), if they just thought hard enough.

The Multiverse, Big Bang, Linear Time, Quantum
uncertainty, Probabilistic reasoning, the Scientific
Method, Volition, Extrapolated Volition, Collective
Volition, Immortalist philosophy, Epicurean
philosophy, Altruism, Perfectionist Ethics, Democracy
(this they did manage!), Libertarianism, Futures
Markets, Natural Rights, Social Equality. I say upon
reflection that any ancient philosopher should have
been able to reason their way to *all* of these.

There are poor philosophers and there are good
philosophers ;) You can trust the good ones. Using
intense self-reflection alone, I'm confident I've
managed to 'punch my way' well past the current
empirical data (as regards general principles at
least). That's why I've so confidently asserted
general principles to SL4 that seem crazy to others
here. (Such as equating objective morality with info
processing which moves the state of physical matter
optimally towards the Omega Point or stating that
there are 4 extra levels of intelligence missing from
Sing Inst's model).

P.S. I wouldn't be so sure that Bayesian Reasoning is
the ultimate epistemology if I were you. I now have a
strong suspicion that it isn't. A couple of years ago
I was an Aristotelian until Rafal talked me out of it
on wta-talk and I became a Bayesian. Now my intense
self-reflection tells me that Bayes may not be the
last word in epistemology either. (You should have
realized that by looking at my 8-level intelligence
schematic. It would be nonsense if Bayes really was
the last word)

All this and I haven't even really bothered to 'hit
the books' yet. It's my philosophical intuition
versus Sing Inst's super-geniuses. I love it ;)



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT