From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Feb 21 2005 - 16:27:10 MST
(Resuming this after moving to the Bay Area.)
Robin Hanson wrote:
> On 1/16/2005, Eliezer S. Yudkowsky wrote:
>
>> The consequences of my accepting the modesty argument would be
>> extremely large, because if I came to believe it as a fact, a large
>> number of extremely important beliefs would change.
>
> I'm glad the topic is important to you. It is important to me too -
> it is a central topic of my research, and I'm seriously considering
> writing a book on it.
I hope that my challenges help to solidify your thinking. (And that
your thinking solidifies in a correct direction, rather than, say,
polarizing with spins opposite to incoming challenges; a major past bias
of mine, one that still scares me, and a reason I still try to avoid
heated arguments about any question that might prove to be actually
important.)
>> The only person I know of who seems to really accept the modesty
>> argument is Hal Finney. You and I, having opinions that are not
>> the academic consensus, and keeping them despite all arguments we
>> "take into account", are not on that list. The modesty argument is
>> not a consensus opinion in science as to how science should be
>> conducted, but you think you know better.
>
> You might be right, but I'm not sure.
Modest of you, but how do your actions differ because of your notsurety?
A doubt that does not participate in steering my life is no doubt at
all. I think that some - not a majority but some - educated Christians
would say that there's a chance that the atheists might be right; but,
having dutifully bowed to modesty, how does their life change? When
people instinctively judge that, in their social environment, to believe
wholeheartedly in clear skies will make them appear overconfident, they
are instinctively careful to be seen publicly doubting the sun. When
people anticipate rain, they take umbrellas.
And I know this, and try to correct for that bug in my thinking. So if
I accept the modesty argument, it has a *very large* effect on what I
anticipate and how I behave.
I would have to somehow twist my brain into anticipating that AI was
fifty years off, that cryonics was a crackpot endeavor, that molecular
machines were a strictly science-fictional nightmare, and that clinical
psychology was medicine. Where do I draw the line?
Hal Finney is the only person I know of who would advocate that I not
draw a line - and I'm not sure that Finney realizes the logical
implication of his modesty. But also this: Finney is undeniably
correct that if most people followed his advice and accepted all
contemporary science without daring to draw lines, they would be better
off. Better off, that is, in the matter of their maps conforming to the
territory. Their personal development would be impeded, and they could
not build further upon the edifice of science; they would have accepted
an Authority. Yet the mere morals of science matter not to Bayesianity.
Their score went up, and there Judgment ends.
I don't think my own score would go up if I thought that AI were fifty
years off. I am not most people, even taking into account that most
people don't believe themselves to be most people.
I cannot accept the modesty argument lightly, because I know better than
to bow in that direction and then keep on with what I was doing. If I
sound immodest, it is because my behavior is immodest however it is
disguised, and I have learned better than to disguise it.
> There is often a difference between the opinions of the typical
> person, the typical academic, and the typical academic who publishes
> near the topic in question (or could do so). And there is often a
> difference between what they say in publications and what they say in
> private. It is what those who publish near say in private that I
> most try to be near.
I don't accuse you of committing the sin yourself; but if that is how
you advise others, they will pick and choose their sources.
> And just because I write favorably regarding a position doesn't mean
> I assign it over 50% probability.
What's special about the 50% threshold? If a calibrated confidence
would be 1% and the Bayesian wannabe says 10%, is he not damned just as
much as one who says 60% where 6% would be calibrated?
50% has a special status in discourse, but it stems from a naive
psychophysics of rationality, not probability theory. 50% is the
special number that lets you say aloud: "I assign a probability of less
than 50%, therefore I 'don't believe in it' - I doubt it just like you,
and pursue it only for the sake of investigating the issue, as a
scientist must..." Let us leave aside that a Bayesian may be damned for
saying 10% if the number be too high. Do you *anticipate*, at less than
50%, those positions you write about? How little must you *anticipate*
something before you no longer really deep-down expect returns on
pursuing it? I've seen people claim to assign 1% probabilities to
solutions they pursue; and I think they have no emotional grasp on what
the phrase "1% probability" means. They say "1% probability" and
multiply their emotional anticipation of reward by a factor of, oh, I
don't know, if I had to pull a number out of my ass, I'd say no better
than 80% in terms of neurotransmitter intensity or some other measure of
psychophysics or behavior.
If the people around you (or the people you define as "near to the
subject") seem to think 0.01%, and you want to say aloud "ten percent"
and care enough emotionally to write papers, then you still violate the
tenet of the modesty argument. There's nothing special about the 50%
threshold.
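One way to cash out "damned", if you like numbers: under a logarithmic
scoring rule, the expected penalty for reporting p where q would be
calibrated is the Kullback-Leibler divergence D(q||p), which stays
strictly positive on either side of the 50% line. A minimal sketch in
Python, using only the numbers from my rhetorical question above:

import math

def kl_penalty(q, p):
    # Expected log-score loss (in nats) from reporting probability p
    # when q would be the calibrated probability.
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

# Overconfidence that stays below the 50% line...
print(kl_penalty(0.01, 0.10))   # calibrated 1%, stated 10%
# ...and overconfidence that crosses it.
print(kl_penalty(0.06, 0.60))   # calibrated 6%, stated 60%
# Both penalties are strictly positive; nothing special happens at 50%.

The two penalties need not be equal - that depends on the numbers - but
neither one is excused for staying under 50%, which is the point.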
>> The modesty argument runs as follows: Maybe 20% of the
>> as-yet-unproven answers, of the researchers of the caliber selected
>> for _Edge_, will turn out to be correct. Or maybe 40%, or 5%;
>> whatever. ... Yet most of the _Edge_ respondents would indicate
>> higher confidence ... The modesty argument is that if all
>> researchers in the _Edge_ group changed their stated confidence
>> (and actual decision-governing anticipation) to 20%, they would
>> achieve better aggregate calibration. ... The (hypothetical) 20%
>> aggregate score is not due to the _Edge_ scientists randomly
>> selecting one of five possible answers on a multiple-choice test.
>> The _Edge_ scientists who selected a correct answer almost
>> certainly had more evidence than they needed just to find that
>> hypothesis in theoryspace; their confidence is justified. The
>> _Edge_ scientists who answered correctly must have done so from
>> thinking with at least some rationality-structure, applying good
>> reasons for confidence. And in turn, those _Edge_ scientists who
>> answered incorrectly must have mistaken what are "good reasons" for
>> belief, or deliberately departed the Way. ... But from the internal
>> perspective of any _Edge_ scientist who gave a correct answer, they
>> must have more Bayesian evidence than just that. Otherwise it would
>> have been impossible for them to pluck that correct hypothesis out
>> of theoryspace.
>
> I'm just not following your logic here. Sure, if some people made
> more cognitive errors than other people, those who made fewer errors
> are more likely to be right. So conversely, those who are right made
> fewer errors. The argument they made turns out to have been a good
> argument with fewer errors. And because they picked the right answer,
> they had better reasons available to them. But each person does not know
> if he made more or fewer errors, so none of this helps him.
Let me start by asking this admittedly easier question: am I, as an
outside observer, helpless to sort the _Edge_ responses into two
groups, A and B, such that A contains a significantly larger proportion
of correct answers than B?
The _Edge_ Question was this:
"What do you believe is true even though you cannot prove it?"
David Buss, a preeminent evolutionary psychologist, answers:
>>>> True love.
>>>>
>>>> I've spent two decades of my professional life studying human
>>>> mating. In that time, I've documented phenomena ranging from
>>>> what men and women desire in a mate to the most diabolical
>>>> forms of sexual treachery. I've discovered the astonishingly
>>>> creative ways in which men and women deceive and manipulate
>>>> each other. I've studied mate poachers, obsessed stalkers,
>>>> sexual predators, and spouse murderers. But throughout this
>>>> exploration of the dark dimensions of human mating, I've
>>>> remained unwavering in my belief in true love.
>>>>
>>>> While love is common, true love is rare, and I believe that few
>>>> people are fortunate enough to experience it. The roads of
>>>> regular love are well traveled and their markers are well
>>>> understood by many—the mesmerizing attraction, the ideational
>>>> obsession, the sexual afterglow, profound self-sacrifice, and
>>>> the desire to combine DNA. But true love takes its own course
>>>> through uncharted territory. It knows no fences, has no
>>>> barriers or boundaries. It's difficult to define, eludes modern
>>>> measurement, and seems scientifically woolly. But I know true
>>>> love exists. I just can't prove it.
Am I limited to assigning to David Buss's response the same confidence
that I assign to each and every other response to the _Edge_ questionnaire?
Suppose that instead Buss had said:
Subjunctive Buss: "I would say 'true love'. By 'true love', I'm trying
to give a name - the name most people would apply, I think - to a
handful of relationships I've seen over the years; relationships that
are rock-solid, where both spouses are still 'in love', not just loving
but in love, after ten years or twenty years. True love is very rare, I
haven't seen enough cases to launch a controlled study of true love, so
I can't tell you from observation what the preconditions are for true
love - although, anecdotally, the two are usually very compatible in
most ways that I've learned to measure compatibility. I do have an
evolutionary hypothesis - more of a 'just-so story' at this point, but
still something that suggests possible experimental tests - that there
is a point in the Darwinian game where it is a winning strategy to pick
a mate and stick with that choice; where the probability of doing better
by straying or betraying, times the probable reward, is just
significantly less than the reproductive reward of a rock-solid
relationship. And if so, people might have a 'true love' mode, and it
might also be more common in hunter-gatherer societies than modern
societies. I don't think that you'll see many cases of 'true love' in a
society that advertises promiscuous women on every billboard, or
publishes stories about bachelor celebrity husbands - it makes the
reward of straying look too high. And the opposing hypothesis, of which
I am well aware, is that 'true love' is something that people imagine,
like dragons and unicorns, and not something that exists in the real
world. I'm aware of the seductiveness of this theory, and I try not to
be seduced. But there are relationships I've seen that seem to call for
that explanation. When I consider the tremendous, demonstrable benefits
of a rock-solid relationship, it isn't out of the question
evolutionarily - even if it sounds too good to be true."
But instead, Buss actually did say:
"I've spent two decades of my professional life studying human mating.
In that time, I've documented phenomena ranging from what men and women
desire in a mate to the most diabolical forms of sexual treachery. I've
discovered the astonishingly creative ways in which men and women
deceive and manipulate each other. I've studied mate poachers, obsessed
stalkers, sexual predators, and spouse murderers. But throughout this
exploration of the dark dimensions of human mating, I've remained
unwavering in my belief in true love."
In the first case, the subjunctive Buss is interpreting the _Edge_
question to mean:
"What do you believe even though you have not yet produced solid
experimental results that satisfy your fellow scientists?"
Maybe I accuse Buss too harshly, and if I interrogated him more
closely, something like the above justification would emerge. But Buss
seems to
have actually interpreted the _Edge_ question to mean:
"What do you believe even though there is strong evidence against it?"
If John Brockman had asked the first version of the question
explicitly, I think he would have gotten a substantially higher
proportion of correct answers among those who responded. If John
Brockman had asked
the second version of the question explicitly, many would have
indignantly refused to answer, and among those who cheerfully did so -
why, there might not be a single correct answer in the bunch, except by
sheer coincidence on questions with a small solution space.
I think that if you applied the simple test of looking at which _Edge_
respondents put disclaimers on the "wrong question" they were given, as
many respondents did, then even that test would separate into
populations A and B with significantly different proportions of correct
answers.
Unless the _Edge_ respondents became aware of the test, in which case it
would become much less effective. And this gets us into your much more
complicated question, not how we could distinguish among _Edge_
respondents, but how they could distinguish themselves. Your phrasing was:
> I'm just not following your logic here. Sure, if some people made
> more cognitive errors than other people, those who made fewer errors
> are more likely to be right. So conversely, those who are right made
> fewer errors. The argument they made turns out to have been a good
> argument with fewer errors. And because they picked the right answer,
> they had better reasons available to them. But each person does not know
> if he made more or fewer errors, so none of this helps him.
Errors of rationality are not independent random variables! They are
not randomly distributed among reasoners. Any human being possesses
some ability to check their own cognition for errors. (And every human
being is therefore impressed with their own rationality, since they spot
the errors they know how to spot, and miss the errors they don't know
how to spot; from their perspective, they sure are catching a lot of
errors!) But, leaving that aside, the ability to check oneself for
errors is part of the standard operating procedure of the brain,
contributing to human intelligence. You say, "But each person does not
know if he made more or fewer errors". No; people will try to guess,
and do well or poorly, according to their mastery of the art. But
errors of cognition are not concealed random variables. People possess
information, both direct and indirect, about errors they may have made;
they may not know how to interpret the clues, but they have the clues.
Now, I do apologize to Buss if I accuse him falsely, but I look at:
"True love takes its own course through uncharted territory. It knows no
fences, has no barriers or boundaries. It's difficult to define, eludes
modern measurement, and seems scientifically woolly. But I know true
love exists. I just can't prove it."
You, or I, can hear the sound of Buss's brain clicking off, the metallic
clunk as the train ratchets off the rationality track. Buss didn't hear
it, but you and I do.
Stephen Jay Gould's concept of a "separate magisterium" comes in handy
here. There are no separate magisteria in reality, which is a single
unified whole, and all distinctions a human conceit. But people do
maintain separate magisteria in their thinking. There is a mundane
magisterium, where people reason by evidence and Occam's Razor, or by a
fragile intuitive grasp on the statistics of mundane things. And there
is another magisterium, which may be called the sacred magisterium, the
magical realm, the spiritual, the unmundane; by any name it is the realm
of thought (not reality) where woolly reasoning is believed by the
reasoner to be permitted and acceptable, for whatever reason. The
sacred magisterium may be as lofty and unattainable as Heaven, or as
easily purchased as a lottery ticket. But the rules of thinking change
there; that is what defines the sacred magisterium as it exists in the
human imagination.
Buss is, by all that I have heard of him, an excellent evolutionary
scientist. He fails on this question of 'true love' because it occupies
a separate magisterium to him.
A master of traditional rationality (and how few people have mastered
even that, let alone Bayescraft?) does not permit any sacred
magisterium. For by the ancient traditions of rationality, woolly
thinking is a sin. By the morals of traditional rationality, there is a
precious and sacred thing, reason and evidence, to which the warm
comfort of woolly thinking must be nobly sacrificed, whatever the
emotional cost. Buss was not willing to make the sacrifice. So he failed.
I am one who would create AI, a wannabe mechanic of minds. Intelligence
is no longer a sacred mystery unto me. For me the difference between
Bayesian probability theory and woolly reasoning is as clear as the
difference between an internal combustion engine and a heap of jello. I
use rationality instead of wishful thinking because to me these are not
different labels but different engines of cognition; I can *see* why the
first cognitive dynamic works and the second one doesn't. And, seeing
this, I know also that there is no magisterium where this is not so. To
me there is no question of being 'allowed' to escape the naughty
constraints of reason into some sacred magisterium where I am finally
allowed to relax with some comfortable nonsense. I would just be
swapping in an engine that didn't work.
As a practitioner of the Way, I also don't hold with the traditional
notion of rationality being a great, noble, and difficult sacrifice.
That is not a smoothly running engine. When you know from which quarter
the winds of evidence blow, when you realize which direction of incoming
evidence you are having to resist, then switch beliefs. No fuss, no
drama, no agonizing before and no self-congratulation afterward, just
shut up and do it.
People say - they repeat the satisfying anecdote - that scientists and
geniuses often do poorly in everyday life, or on questions outside their
field. That does not satisfy me; if I am to succeed on purpose, instead
of by accident, I should be able to succeed reliably. A well-designed
engine just works. Buss failed embarrassingly, in a way that would be
visible even to a traditional rationalist. Buss *knew* better, on some
level, and still he made the mistake; he said himself that true love
seemed scientifically woolly. Buss had already accomplished the hard
part of the problem, finding signs to distinguish truth from falsehood;
he just ignored his own intelligence. If scientists fail on questions
outside their field, perhaps it is because they apply a different method
of thinking. Why would scientists apply a different method of thinking
outside their art? Because they believe it is permitted them; because
in our society they can get away with it; because no one holds them to a
higher standard. They have not a mind-mechanic's art to look at their
thoughts and see engines of success and engines of failure. They do not
hear the clunk as the train of reason derails.
To ask how the _Edge_ respondents could distinguish *themselves*,
catch their *own* errors, is a more complicated question than to ask how
I can distinguish among them. When I audit someone from the outside, I
catch mistakes that they don't know how to conceal. Suppose Buss-1 were
a sufficiently strong traditional rationalist to know how mistaken
Buss's actual answer sounded, but Buss-1 still possessed a strong
emotional attachment to 'true love'. Buss-1 might fixate on the same
answer because of the same cognitive forces, while producing a
rationalization - "reasons" - like Subjunctive Buss's statement above.
And then I
as an auditor might be fooled, placing Buss-1's answer in the A
population, even though Buss-1 gave just the same answer as Buss. It is
much harder to learn not to rationalize than to learn what sounds like a
rational answer to your fellows. The second task is ancestral, the
first task is not.
But it's not a coincidence that Buss, who made the mistake, did not know
to conceal that mistake from an outside auditor. Errors of rationality
are not independent random variables. I would expect those _Edge_
scientists who achieved a correct answer to have done so through
rational reasoning. One cause for their reasoning being rational was
that they managed to make fewer errors or overcome the errors they did
make. The cause for their making fewer errors was not mostly randomness
but mostly skill. The ratio of possible errors to possible successes is
very high, and it only takes one solid error to unlink a chain of
reasoning. When people get things right, on questions with large
solution spaces, it is not by coincidence.
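A toy model shows what that non-independence buys. (This is my own
illustration, not data about the actual _Edge_ respondents; the skill
parameter and accuracy numbers below are invented.) Give each reasoner
a persistent skill level, let skill set the per-question chance of a
correct answer, and answers to different questions stop being
independent:

import random

random.seed(0)

N_REASONERS = 10_000

def simulate():
    # Accuracy on question 2, conditioned on getting question 1 right or wrong.
    q2_given_q1 = {True: [], False: []}
    for _ in range(N_REASONERS):
        skill = random.random()        # a persistent trait, not a fresh coin flip
        p_correct = 0.1 + 0.7 * skill  # skill sets the per-question accuracy
        q1 = random.random() < p_correct
        q2 = random.random() < p_correct
        q2_given_q1[q1].append(q2)
    return {k: sum(v) / len(v) for k, v in q2_given_q1.items()}

acc = simulate()
print(f"P(correct on Q2 | correct on Q1) = {acc[True]:.2f}")
print(f"P(correct on Q2 | wrong on Q1)   = {acc[False]:.2f}")
# Because the errors share a cause (skill), the first number comes out
# well above the second; if errors were independent random variables,
# the two conditional accuracies would be equal.

Scrub out the shared cause and the two conditional accuracies collapse
to equality; keep it, and correctness on one question is evidence of
correctness on the next.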
I would expect those _Edge_ scientists who answered correctly once to be
significantly more likely to answer correctly again, relative to the
population who answered incorrectly.
I would expect this to be caused, in large part, by the correct
answerers having more veridical self-estimates of when they have
committed errors.
And yes, those who answer incorrectly are free to reply smugly, "But now
I'm going to make a very high estimate of my own veridicality in
detecting errors! How about that? Huh? Huh?" But since they can't
get the right answer on scientific questions, why should we expect them
to get the right answer on reflective self-estimates? The people who
answer incorrectly, well, what can you do with them? I'd rather
concentrate my efforts on the people who already have some skill at
traditional rationality. Those are the people whom I am likely to be
able to teach better and more detailed skills of self-estimation. And I
want their self-estimates to be as accurate as possible - to develop
further that skill they have already used. The modesty argument doesn't
strike me as helpful in doing so. It just lumps them in with people who
are blatantly less skillful, and refuses to cluster further or provide
detailed predictions.
In the end, my verdict is still: "When we dream, we do not know we
dream; but when we wake, we know we are awake."
Fairness in argument is a foundational motive force upholding
traditional rationality, but the Way has only one single rule, to cut
through to the correct answer. If I permit myself to think about
whether my arguments sound fair according to human political instincts,
even for a fraction of a second, I am no longer following the Way. I
may be following the tradition of traditional rationality, which has
wrought much good in its time; but I am no longer following the Way of
trying my absolute best to perfect my individual map to the territory.
I can only improve my individual map of my individual errors by
rendering it a detailed description of myself, not by turning it into a
set of categorical imperatives that sound like fair political arguments,
for to say what is true of everyone loses the individual detail. I can
describe a specific chair in more detail than I can describe all chairs.
Is this dangerous? Yes! So be it! The Way is often dangerous, and to
get a sufficiently good prediction you may need to essay risky skills of
thought. There are risks! But pointing out that risk doesn't end the
discussion. I accept the risk because I want a detailed skill of
assessing my probability of specific error on specific problems; and
that requires that I follow different rules than estimating the average
error probability of the entire human species and then giving that same
number for myself. It would be fair to give the same number for
everyone. And if everyone followed that strategy, we might be better
off than we are right now. But it would not be the Way that leads to
the very highest score I can wring from my human brain. And I am not
everyone, even taking into account that everyone thinks they are not
everyone.
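The score difference at stake can be put in numbers. (Another sketch of
my own; the per-person error rates below are invented purely for
illustration.) When calibrated error rates genuinely differ across
people, everyone reporting the pooled average costs each person exactly
the divergence between their own rate and that average:

import math

# Hypothetical calibrated error probabilities for a handful of reasoners
# on some class of problems; the numbers are made up.
true_error = [0.02, 0.10, 0.30, 0.50, 0.70]
pooled = sum(true_error) / len(true_error)   # the same number for everyone

def expected_log_score(q, p):
    # Expected log score when the true error rate is q and you report p.
    return q * math.log(p) + (1 - q) * math.log(1 - p)

own = sum(expected_log_score(q, q) for q in true_error)
avg = sum(expected_log_score(q, pooled) for q in true_error)
print(f"everyone reports their own rate : {own:.3f}")
print(f"everyone reports the pooled rate: {avg:.3f}")
# The gap is the summed KL divergences D(q_i || pooled) >= 0: the pooled
# number is "fair", but it scores strictly worse whenever people differ.

That is all "fair to give the same number for everyone" buys you: a
loss that is zero only if people are exactly alike, and grows as they
differ.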
Though it *is* necessary to know the observed error rate of the subjects
in those cognitive psychology experiments, and to come to terms with
what that means for you as a fellow human, and as a result substantially
change your ways of thinking for specific problems instead of bowing in
the direction of modesty on random general occasions. To do better than
those subjects, it is necessary to admit the possibility that you can do
better, and plan to achieve it.
>>> If the SI has rock solid evidence that its redundant
>>> incorruptible internal audit systems cannot succumb to the usual
>>> human biases of overconfidence, well good for it.
>>
>> Why should an SI require "rock solid evidence" to make up for
>> *your* doubts? Why should an SI assemble evidence to overcome a
>> doubt for which it has no evidence?
>
> Until it has evidence that it is in fact an SI, it just knows that it
> is an intelligence. And if it knows that most intelligences have
> been found to be overconfident, it must worry about that possibility
> for itself.
I don't see how I could possibly make a mindlike process with "super"
competence - deserving of the name superintelligence - that didn't have
superhumanly detailed and veridical estimates of self-competence on
specific problems. The kind of abstract argument you're using is very
high-level and agent-oriented and may well be human-specific. I'm still
pondering how to translate the primitive terms in that argument you just
stated into elements in an engine of cognition.
An SI, or at least an SI such as I would build, does not come into
existence as a full-fledged but empty mind floating in a vacuum
pondering "cogito ergo sum". A seed AI has to build itself into
existence. By the time it knows that most humans possess an evolved
bias toward overconfidence, it already has an extremely detailed
self-model that tells it it's not human.
At this point, I can't definitively rule out a foundation for Bayesian
probability theory built around a categorical imperative that applies to
all rational reasoners and an anthropic random selection from among all
possible observer-instants, but more probably *not*. Those concepts
strike me as too complicated to belong in a basic ontology.
>> What prevents a schizophrenic from reasoning likewise? "I am God,
>> therefore I can correctly estimate whether I am God." It seems to
>> me that the schizophrenic is just screwed, and the SI should not
>> take into account that the schizophrenic is just screwed in
>> deciding its own confidence.
>
> Real schizophrenics have ample social evidence that they are in fact
> broken. Other people around them consistently tell them they are
> broken. If they follow my advice and listen carefully to social
> evidence, they will do better than following your advice to listen
> less to such evidence.
Schizophrenics have little or no chance of following your advice. If we
started out with a population evenly distributed over IQ, the people
most likely to take your advice would be the ones who least need it, or
who would be actively damaged by it. Only the fact that a large prior
bias exists
toward average IQ lends any hope at all to your cause. And we must
consider that the most important persuadees, the centers of utility to
science, are those who would be most damaged if they accepted the
modesty argument. The modesty argument is an argument for conformity
and conformity is sludge in the gears of science. That last sentence is
an argument from a mere moral virtue of traditional rationality - what
helps science as a whole is not necessarily what improves your Bayesian
score - but it also impacts on the Way. If something slows down social
arrival to the correct answer or impedes social ability to distinguish
truth from falsehood, it probably isn't the *very best* Way an
individual could possibly think.
Most people would be better off if they followed Hal Finney's advice,
but is it the *best* advice they could follow?
Many people already have some grasp on rationality, and they fail
because they decide not to use it, not to listen to what their own
intelligence tells them. If you tell them to be modest, that will just
be another tool of rationality, which of course does not apply to the
sacred magisterium. At least that's my guess as to how the psychology
will work.
>> Being forced counts for nothing. ... Speak to me of people who
>> voluntarily invest effort and study in the explicit cognitive
>> science of rationality, and look up the references of their own
>> will and curiosity. Sample size too small? I'm not surprised. ...
>> Being thrown into a state of genuine uncertainty and curiosity
>> about the outcome of your own reasoning counts for *much* more.
>
> I fear your intent is to narrow your claim so that you become the
> only known example which could confirm or deny your hypothesis. If
> that doesn't seem suspiciously like excuse making, nothing does.
Then so be it. I have learned that fearing suspicion is not the Way.
Otherwise I would have thought up some clever excuse for my claim, some
way to disguise it and make it seem more 'rational' to a traditional
rationalist. The hell with that. I know how my thoughts arrived at my
answer; let them stand up front and naked. No, I do not know of any
other specific person whom I place in my statistical cluster. If they
are true rationalists with eyes that see reality, why are they living
comfortable academic lives instead of using their skills to prevent
the destruction of the world? If they aren't trying to build AI, how
would
they acquire knowledge of Bayescraft? You will accept that argument or
not, more probably not, as you choose, but it is how things look to me.
Why should I hail someone as Bayescrafter if their supposed skills
change their life so little, or are used to little ends?
> And again, even if they do disagree less, the fact that they do
> disagree says that they are still overconfident.
It says that at least one of them is overconfident.
>>>> Consider someone who buys a lottery ticket and wins. The odds
>>>> of winning the Mega Millions lottery are around 125,000,000 to
>>>> 1. Now consider alternate hypotheses, such as "The world is a
>>>> computer simulation and I am the main character" or "I have
>>>> psychic powers". Is the prior probability of these statements
>>>> really less than 1e-8?
>>>
> I agree that if you put high enough priors on these theories that
>>> have a much higher likelihood for the data you see, you may well
>>> need to take them seriously.
>>
>> Consider the startling consequence: the lottery winner may need to
>> agree to disagree with a third-party observer, even if they are
>> both meta-rational and agree perfectly on estimates of their
>> respective rationality. (I am still not sure how to resolve this
>> puzzle.)
>
> Under the hypothesis the "people" you would be disagreeing with
> wouldn't actually be people at all. They would be the impression of
> people given to you by a simulator. These impressions would not be
> simulated in enough detail for them to internally generate opinions.
And if, in reality, both people are real? Is it okay to agree to
disagree if you both doubt whether the other person might be a
simulation (or otherwise has less measure than yours)? Is it okay to
discount another person's opinions, if you think the other person is an
accurate simulation of a meta-rational Bayesian reasoner rather than a
real meta-rational Bayesian reasoner?
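For reference, the arithmetic behind that "startling consequence" is
just an odds-form Bayes update. (A sketch that takes the quoted
125,000,000-to-1 lottery odds at face value, and assumes, purely for
illustration, a prior of 1e-8 and that the simulation hypothesis makes
the win near-certain.)

# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
prior_odds = 1e-8             # assumed prior for "simulation centered on me"
p_win_if_simulated = 1.0      # assumed: the simulation makes the win near-certain
p_win_if_ordinary = 1 / 125_000_000

likelihood_ratio = p_win_if_simulated / p_win_if_ordinary
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)         # ~1.25 -- no longer negligible on these made-up numbers

On those numbers the winner's posterior lands around even rather than
negligible, which is all the arithmetic says; what the third-party
observer should make of it is the part of the puzzle I said I still
don't know how to resolve.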
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence