Re: Maximize the renormalized human utility function!

From: Michael Anissimov (michaelanissimov@gmail.com)
Date: Thu Aug 10 2006 - 20:57:47 MDT


Jef,

> I like that Eliezer is exploring this area, but I'm far from agreeing
> that this is the overwhelming priority that it's portrayed to be by a
> certain elite set of people.

These people number in the many hundreds or even thousands, and come
from diverse backgrounds... it's unfortunate that you have to take
this tone just because we have different estimates of the
technological feasibility of AGI. Nick Bostrom views Friendly AI as
an overwhelming priority, for example, and it's very difficult to
accuse him of elitism or closed-mindedness. In fact, he is better
educated than all of us and his worldview is the paragon of objective
neutrality.

This list is called "SL4", so you should expect people here to hold
SL4 views. It's okay if you're SL3 - just because SL4 has a higher
number attached to it doesn't necessarily mean it's better or more
correct, just more extreme-sounding. But it's useless to talk to an
SL4 with SL3 or SL2 language.

Not to discourage you from posting here - what you have to say is
welcome - but I really do think you will find more people who share
your views on the extropians and wta-talk lists. As for us, well, we
seem to be consumed by this concept known as the hard takeoff...

With regard to your remarks on morality being the product of billions
of years of synergetic evolution, there's an interesting thought
experiment worth bringing up. If I had an atomic-level scanner and
sophisticated nanotechnology, I could simply pick the most benevolent
person (probably a girl) that I know, and press a button to copy them
repeatedly. I would be making the world a better place, without
understanding an iota about the billions of years of synergetic
evolution underlying morality. Small wonder!

> Well, I'm not trying to win an argument here so I'll let it rest, but
> I do occasionally wish to contribute what I see as a worthwhile view
> even if it doesn't support this particular version of Pascal's Wager.

Again, you're using spiteful and frustrated language here, which is unnecessary.

-- 
Michael Anissimov
Lifeboat Foundation      http://lifeboat.com
http://acceleratingfuture.com/michael/blog


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT