From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Tue Mar 20 2007 - 23:54:52 MDT
I may be talking about things I don't understand, *but*:
I guess I'd say that you'd need to have a lot of confidence in your
predictions about the end result. For example, it's a good idea for humans
to avoid choices that carry even a low-probability risk of catastrophe,
even when their average case is significantly the best option.
Now, even with Bayes, it's impossible to prove a negative, so you can
only have relative confidence in your assessments, not absolute confidence.
Behind every equation should be the realisation that you might not
know everything. So unless you have a lot of certainty, it's probably
still worth avoiding catastrophic possibilities.
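To make that concrete, here's a minimal sketch (all the numbers are invented
for illustration, not taken from anyone's argument) of how an option that
wins "on average" can still be the worse bet once you allow for a small
catastrophic tail and for uncertainty about your own probability estimate:

    # Expected value of a list of (probability, payoff) pairs.
    def expected_value(outcomes):
        return sum(p * v for p, v in outcomes)

    # A safe option: a modest, certain payoff.
    safe = [(1.0, 10)]

    # A risky option: usually much better, but with a small chance of catastrophe.
    p_cat = 0.001        # our best estimate of the catastrophe probability
    cat_cost = -10_000   # the catastrophic payoff
    risky = [(1 - p_cat, 100), (p_cat, cat_cost)]

    print(expected_value(safe))    # 10.0
    print(expected_value(risky))   # 89.9 -- the "average case" clearly wins

    # But if our estimate of p_cat could be off by a factor of 10 or 100
    # (we "might not know everything"), the comparison flips:
    for factor in (1, 10, 100):
        p = min(1.0, p_cat * factor)
        print(factor, expected_value([(1 - p, 100), (p, cat_cost)]))
        # factor=1   ->  89.9  (risky looks best)
        # factor=10  ->  -1.0  (now worse than the safe option)
        # factor=100 -> -910.0 (disastrous)

The point is just that relative confidence in your estimates, rather than
absolute confidence, can be enough to make the catastrophic option not worth it.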
Cheers,
-T
On 3/21/07, Chris Hibbert <hibbert@mydruthers.com> wrote:
>
> > How should we respond to future cases where Singularitarian ends
> > truly justify extreme means? This is a basic moral dilemma of all
> > ideologies, especially those which claim to offer absolute welfare to
> > humanity.
>
> I have a different answer than Dagon. Ends don't justify means. Means
> have to be defensible on their own. If certain means are immoral, it
> doesn't matter what ends you are pursuing.
>
> If your ends are really important, find a defensible way to pursue them.
>
> Not everyone agrees with me.
>
> Chris
> --
> It is easy to turn an aquarium into fish soup, but not so
> easy to turn fish soup back into an aquarium.
> -- Lech Walesa on reverting to a market economy.
>
> Chris Hibbert
> hibbert@mydruthers.com
> Blog: http://pancrit.org
>
>