From: Mark Waser (mwaser@cox.net)
Date: Sat Mar 15 2008 - 20:48:51 MDT
> There are a number of ways in which humans could become extinct without
> our goals being stomped on. Human goals are appropriate for survival in
> a primitive world, not a world where we can have everything we want. If
> you want 1000 permanent orgasms or a simulated fantasy world with a
> magic genie, then the nanobots go into your brain and your wishes are
> granted. What difference does it make to you if your brain is
> re-implemented more efficiently as gray goo and your body and world are
> simulated? You're not going to know. Does this count as extinction?
If the person being "re-implemented" believes so, then yes. In that
case, you are clearly interfering with their goal of not becoming
extinct. You can't be absolutely certain that your "re-implementation"
truly is identical and that the subject would never know or realize the
difference. Doing this against someone's will is evil.
> But we don't really have a choice over whether there is competition
> between groups or not. My bigger concern is the instability of
> evolution, like a plague or population explosion that drastically
> changes the environment and reduces the diversity of life. Some of the
> proposals for controlling the outcome of a singularity depend on a
> controlled catastrophe by setting the initial dynamic in the right
> direction. This is risky because catastrophes are extremely sensitive
> to initial conditions. But of course we are in the midst of one now, a
> mass extinction larger than any other in the last 3.5 billion years.
> We lack the computing power to model it, and there is no way to
> acquire it in time because the process itself is needed to produce it.
> So it always stays a step ahead. Sorry for the bad news.
Um, I'm missing the bad news (or, at least, how it relates to my
proposal). Could you please clarify?