From: Olie Lamb (neomorphy@gmail.com)
Date: Sun May 14 2006 - 22:29:24 MDT
Humph. Turn your attention away from SL4 for a little while, and look what
comes up!
Let's start by clarifying some terminology. At least that will (hopefully)
reduce the extent to which people argue past each other.
On 5/13/06, m.l.vere@durham.ac.uk <m.l.vere@durham.ac.uk> wrote:
> Quoting Phillip Huggan <cdnprodigy@yahoo.com>:
>
> For the record, I define nihilist as 'one who holds no belief however
> widespread, not supported by proof'.
That's where you differ from a lot of people.
I'd say that most ethicists (including those who identify as nihilists)
would define nihilism as the belief that there can be no _truth_ to moral
entities/facts/statements.
That is, roughly: a nihilist either "believes" or "is firmly convinced" that
there is no objective morality.
What you just described is closer to a moral sceptic - one who /doubts/
whether there is any objective morality.
What's the difference? Burden of proof. Nihilists have just as much work to
do to convince a sceptic as moral realists do.
Of course, a sceptic could also entertain the possibility that some really
nasty "moral reality" obtains, where we "ought" to go around torturing each
other. But even granting that you can derive an "ought" from an "is", I
think you'd be hard pressed to show any such thing.
(But then again, I'm a slightly sceptical moral realist. Anyhoo...)
I'm tempted to introduce an analogy to the truth value of statements about
the future, as viewed by determinists and non-determinists. But that would
be off-topic philosobabble :)
... Back on to SL4 relevance
...
The reason some people keep banging on about Friendliness is that it
removes the need to quibble about meta-ethics.
Considering your response to:
> JohnKClark:
>> Should? If you're right then why "should" we do anything? You are saying we
>> should reject morality because it's the right thing to do and that does not
>> compute.
>
> Should, not in terms of morality, but in terms of what will produce the most
> desirable state for us (as individuals).
I'd like you to consider what would, for you as an individual, produce the
most desirable state, as far as AI motivations go...
You want a superintelligent AI to behave nicely... For your future self's
sake, you don't want it to be convinced by nihilism and do you wrong, now,
do you?
Now, consider again your first statement: "We should reject traditional
morality and embrace moral nihilism".
If you mean "It is in our interests to (1) not assume that moral realism is
true and (2) not assume that a clever AI won't be persuaded to be
nihilistic", then you're dead right. It is SL4-relevant to examine whether
a clever AI might develop a meta-ethical view that would cause it to be
UnFriendly, and consequently shit on us.
If you mean "Let's abandon ethics and concentrate on how we can make that AI
do our selfish bidding," then you're a myopic fool.
I'll presume that you 1) are shit-stirring, 2) mean "sceptic", and 3) are
actually considering something that impacts on Friendliness.
--Olie McLean