RE: Ethical theories

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Feb 04 2004 - 08:30:07 MST


It occurs to me that the meta-goal of science,

"Create conjectures with more empirical support than their predecessors"

also has some hidden relativism in it, in the sense that what it really
means is

"Create conjectures that have more empirical support than their
predecessors, as judged by some community C"

Gödel's theorem (properly deployed) shows that for any community C (with
finite total compute power) there are some conjectures whose degree of
empirical support can't be estimated by community C.
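
To make the finitude point concrete, here's a toy model in Python (the
names -- Community, Conjecture, budget -- are invented for illustration,
and the sketch captures only the resource bound, not Gödel's theorem
itself):

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class Conjecture:
      name: str
      steps_per_test: int  # compute cost of weighing one piece of evidence

  @dataclass
  class Community:
      budget: int  # total compute steps this community can ever spend

      def support(self, c: Conjecture, num_tests: int) -> Optional[float]:
          # Return an estimate of empirical support, or None when making
          # the estimate would exceed the community's total compute budget.
          if c.steps_per_test * num_tests > self.budget:
              return None
          # Stand-in estimate; a real community would actually run tests.
          return num_tests / (num_tests + 1)

  c = Community(budget=10**6)
  cheap = Conjecture("cheap hypothesis", steps_per_test=10)
  costly = Conjecture("costly hypothesis", steps_per_test=10**9)
  print(c.support(cheap, 100))    # ~0.99: judged by this community
  print(c.support(costly, 100))   # None: beyond this community's reach

The real argument is subtler, but the upshot is the same: which
conjectures are judgeable at all depends on which community C is doing
the judging.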

Now, we could say

"Create conjectures that have more empirical support than their
predecessors, as judged by a maximally intelligent, rational, self-aware
mind"

(and we could try to formalize this appropriately)

but this isn't useful, as we don't have this uber-mind at hand.

Or we could say

"Create conjectures that have more empirical support than their
predecessors, and create a community of empirical-support-judgers whose
intelligence, rationality and self-awareness is progressively increasing"

This is analogous to the meta-ethic

"Iteratively create rule-sets that will be accepted, by a community of minds
that are increasingly intelligent, rational and self-aware"
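
As a deliberately crude simulation of that meta-ethic (the "quality"
scores, the community size, and the modeling of intelligence as
shrinking judgement-noise are all assumptions of the sketch, not
claims):

  import random

  def judge(quality: float, intelligence: float, rng: random.Random) -> bool:
      # Each judge sees the rule-set's quality through noise that
      # shrinks as intelligence grows, and accepts if it looks good.
      return quality + rng.gauss(0.0, 1.0 / intelligence) > 0.0

  def iterate_rule_sets(qualities, community_size=101, seed=0):
      rng = random.Random(seed)
      intelligence = 1.0
      for i, quality in enumerate(qualities):
          votes = sum(judge(quality, intelligence, rng)
                      for _ in range(community_size))
          accepted = votes > community_size // 2
          print(f"round {i}: quality={quality:+.2f} accepted={accepted} "
                f"(judge intelligence={intelligence:.2f})")
          intelligence *= 1.5  # the community grows between rounds

  iterate_rule_sets([0.1, -0.1, 0.2, -0.05])

The only point of the toy is that "accepted by community C" becomes a
more informative signal as C grows more intelligent, rational and
self-aware.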

-- Ben G

> -----Original Message-----
> From: Ben Goertzel [mailto:ben@goertzel.org]
> Sent: Wednesday, February 04, 2004 8:56 AM
> To: sl4@sl4.org
> Subject: RE: Ethical theories
>
>
>
> Hi,
>
> > ### These two quotes from separate posts jibe with what I think
> about the
> > initial problem you posited, the description of methods for
> evaluation of
> > ethical systems in the abstract, a metaethics. Metaethics would
> have to be
> > independent of the qualia of desire and the application of
> these qualia to
> > evaluation of outcomes - otherwise it would be identical to
> ethics itself.
>
> Yes, you have expressed very well what I would like to see a
> "metaethics" be...
>
> > Yet, most of the discussion so far focused on what we (or
> imaginary beings
> > such as the Buddha AI or Friendly AI) might think about
> > particular systems,
> > seen as tools for achieving goals that suit our fancy, such as joyous
> > growth, enlightenment, the intellectual pleasure associated with
> > absence of
> > contradictions in thinking (consistency), etc - in fact,
> > confining itself to
> > observer-dependent ethics.
>
> I agree -- rather than a true meta-ethics, something like "joyous
> growth" is in fact a *maximally abstracted ethical principle*.
> As compared to particular ethical rules like "don't kill people",
> this is moving in the direction of a meta-ethic, but it's still
> not a meta-ethic...
>
> > Let me first observe that as you write above, a meta-statement
> can be made
> > about the overall goal of science - although instead of Popper's
> > injunction
> > I would say "Create conjectures that come true".
>
> Obviously, Popper avoided this phrasing intentionally, to sidestep
> quibbles about the philosophy of "truth"... your phrasing is fine
> with me, though...
>
> > How can we use the above observations for deriving a metaethics
> by analogy
> > to metascience, without direct recourse to desires and ethics itself?
> >
> > I think we could begin by making the metaethical statement
> > "Formulate rules
> > which will be accepted" (although this statement is actually a
> high-level
> > link in a very long-term recursive mental process, rather than
> a starting
> > logical premise).
>
> That's interesting. It's a little deeper than it seems at first,
> and I need to think about it more.
>
> At first it seems a pure triviality, but then you realize what
> the preconditions are for the statement to be meaningful. For
> "be accepted" to be meaningful, one needs to
> assume there is some mind or community of minds that has the
> intelligence and the freedom to accept or to not accept. So one
> is implicitly assuming the existence of mind and freedom. So
> your rule is really equivalent to
>
> "Ensure that one or more minds with some form of volition exist,
> and then formulate rules that these minds will 'freely' choose to accept"
>
> If we define happiness_* (one variant of the vague notion of
> "happiness") as "the state of mind a volitional agent assumes
> when it has obtained what it wants", then your rule is really equivalent to
>
> "Ensure that one or more minds with some form of volition exist,
> and then formulate rules that these minds will 'freely' choose to
> accept, because they assess that accepting these rules will bring
> them an acceptable level of happiness_*"
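>
> To make this unfolded rule concrete, here's a toy formalization
> (Agent, wants, has, and the acceptance test are all invented for
> illustration, not a serious model of mind):
>
>   from dataclasses import dataclass, field
>
>   @dataclass
>   class Agent:
>       wants: set[str] = field(default_factory=set)
>       has: set[str] = field(default_factory=set)
>
>       def happiness_star(self) -> bool:
>           # happiness_*: the agent has obtained what it wants.
>           return self.wants <= self.has
>
>       def accepts(self, predicted_gain: set[str]) -> bool:
>           # Accept a rule iff the agent assesses that living under it
>           # would bring an acceptable level of happiness_* -- here,
>           # crudely, that it would end up with all it wants.
>           return self.wants <= (self.has | predicted_gain)
>
>   agent = Agent(wants={"food", "liberty"}, has={"food"})
>   print(agent.happiness_star())     # False: a want is unmet
>   print(agent.accepts({"liberty"})) # True: the rule looks good to it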
>
> My point in tautologously unfolding your rule in this way is to
> show that (as you obviously realize) it contains more than it
> might at first appear to...
>
> However, its shortcoming is that it doesn't protect against
> minds being stupid and self-delusional. Volitional
> agents may accept something even if it's bad for them in many
> senses. (This is because happiness_* is not the only meaningful
> sense of happiness).
>
> The problem is that "to accept" is a vague and somewhat screwy
> notion, particularly when you're dealing with minds that are
> stupid and self-conflicted (like most human minds). Just because
> a human says "I accept X" doesn't mean that all parts of them
> accept X -- it just means that the subsystem that's grabbed
> control of the mouth says "I accept X." It also doesn't mean
> they really understand what X means -- they may accept X out of
> some dumb misunderstanding of what the consequences of X will be.
> Acceptance becomes more and more meaningful, the smarter and
> more self-aware the acceptor becomes.
>
> And this is where "growth" seems to play a role, if you interpret
> "growth" as meaning "minds becoming less stupid and
> self-delusional over time."
>
> So I want to say something like
>
> "Iteratively create rule-sets that will be accepted, by a
> community of minds that are increasingly intelligent and self-aware"
>
> Now, you could argue that this veers further away from being a
> pure meta-ethic. Or, you could argue that "accepting a rule"
> means more for a mind that is more intelligent and self-aware, so
> that it actually makes your meta-ethic more powerful and meaningful.
>
> But of course my modified version is basically a variant of my
> Principle of Joyous Growth -- since I have growth in there in the
> form of "increasing intelligence and self-awareness", and
> "joyousness" in the form of happiness_*.
>
>
> -- Ben G


