From: Evan Reese (firstname.lastname@example.org)
Date: Sun Apr 14 2002 - 10:36:49 MDT
Now, I think we're finally getting to the heart of things.
From: "Ben Goertzel" <email@example.com>
Subject: RE: Why bother (was Re: Introducing myself)
> Here is one justification for the belief that "working toward the
> Singularity is important."
> Consider the probability
> p(t) = the probability that, during year t, some nutcase will wipe out the
> human race via biological terrorism or some other means
> You estimate that p(t)~=0, with a fairly high subjective confidence
Strictly speaking, no. After all, the probability of all the air in a room
gathering itself into one half of the space, leaving the rest of the room in
vacuum, is not zero; it is something on the order of 10 ** -80 or so. The odds
of someone wiping out humanity might be higher than that (they are more
difficult to quantify than kinetic motion), but I don't take the possibility
seriously.
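As an aside, the order of magnitude of that figure is easy to sketch. In a toy model (my assumption, not part of the original post) where each of N gas molecules is independently equally likely to be in either half of the room, the probability that all of them sit in one designated half is (1/2)**N, and a few hundred molecules already put that near 10^-80:

```python
import math

def all_in_one_half(n_molecules: int) -> float:
    """Toy model: probability that N independent molecules, each equally
    likely to be in either half of the room, all sit in one chosen half."""
    return 0.5 ** n_molecules

# Even a few hundred molecules make this astronomically unlikely;
# a real room (~10**27 molecules) would be unimaginably less likely still.
n = 266
prob = all_in_one_half(n)
print(f"P(all {n} in one half) = 2**-{n} ~ 10**{math.log10(prob):.0f}")
```

The specific N here is chosen only to land near the 10^-80 figure mentioned above; the point is how fast the probability collapses with N.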
> I think that your confidence is primarily not based on objective factors
And your evaluations are? If you think you have objective factors for your
viewpoint, then why don't you bring them out? That is what I have been asking
you for all this time. How many times do I need to repeat it?
> I believe that p(t) increases over time -- a reasonable, though not certain,
> belief, based on the advent of more & more sophisticated technologies --
I believe the opposite, for the same reason. You still haven't addressed my
example of computer viruses that I mentioned in an earlier posting. By your
logic, there should be an increasing probability of some computer virus
wiping out the Internet. But despite the fact that more people are writing
them than ever before, and their spread is much easier than it was even 5
years ago, the damage they do is proportionately less than it was back when
Robert Morris sent out his Internet worm. How do you explain that?
I explain it by saying that increased technological sophistication allows
for greater abilities to deal with threats. Yes, the power of individuals
to do harm grows, but the power of the system as a whole to respond grows
faster. As long as knowledge about these threats - including nanotech,
biotech, software, etc. - is not bottled up, I do not see why the same
logic should not apply to these threats as it does to that of viruses.
If you have reasons why it shouldn't, then by all means let's discuss them.
I really want to hear them.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT