From: Marcello Mathias Herreshoff (m@marcello.gotdns.com)
Date: Sat Aug 13 2005 - 18:45:18 MDT
On Thu, Aug 11, 2005 at 02:55:26PM +1200, Marc Geddes wrote:
> --- Marcello Mathias Herreshoff <m@marcello.gotdns.com> wrote:
> > Stop right there! Questions about morality are not about what sentients
> > *actually* do. They are about what they *should* do. If you found a
> > sentient life form that thinks eating human babies is moral, would you
> > change your opinions on baby eating?
--- MORALITY NOT OBJECTIVE ---
I take it by your silence that you concede that that particular experiment
wouldn't work. This is because science can only answer true/false questions,
not right/wrong questions (I explained this in my first post). Thus any
right/wrong question (i.e. morality) isn't objective. I'm not sure how much
clearer I can make this.
> > To summarize, the reason that you can't treat universal morality
> > scientifically is that there are no testable experiments that could
> > demonstrate whether any particular morality is in fact the universal one.
> > There is absolutely nothing wrong with induction. However, induction
> > can't be used here because there is no evidence on which to use it.
> See below. What cognitive principles underpin Induction? Induction is
> what enables us to reason in the first place, therefore the general
> cognitive processes that enable Induction to occur must be objectively
> *good*. The cognitive procedures behind Induction are open to
> experimental testing.
--- CONSISTENCY ---
Firstly, when you talk about all logically possible sentients you are
including the inconsistent ones. Needless to say, humans aren't consistent.
Neither are a large portion of logically possible sentients for that matter.
--- MY INDUCTION ---
No. It is true that if I am consistent, then I will first conclude that my
actions are a good thing because they further my goal system, and then I will
conclude that my ability to use Induction and Deduction to act in a smarter
way is also a good thing.
However, said consistent intelligence would only conclude that *its own*
cognitive abilities are good. As for the cognitive abilities of other
intelligences, it would consider them good or bad depending on its opinion
of their goal systems.
For example: If you had the ability to travel back in time and turn Hitler
into a moron who would be unable to use induction and deduction, would you
do it?
If you answer no, your hypothetical inaction lets lots of innocent people
die. If you answer yes, you have proven that induction and deduction are
not fundamentally good on their own. They are tools of no inherent
goodness or badness.
> > > > Physics isn't Psychology.
> > > Are you sure? Objective Idealism treats physics as a form of
> > > cognition, you know. How can something be said to exist at all if it
> > > wasn't being *interpreted* by some sort of cognitive process?
> > > Everything you know about the world requires a mental model to be
> > > comprehended, you realize?
> > So physics is a form of cognition? Things exist by being interpreted?
> > Alright then! Let's put that to the test! Nope. Sorry. The test
> > object failed to disappear when I stopped thinking about it. I was
> > also completely unsuccessful at telekinesis.
> Objective Idealism does not say that things are mind-created. There
> exists an objective reality outside our heads. Objective Idealism says
> that *reality itself* is cognition. Take for instance theories of
> panpsychism, which assign some degree of consciousness to everything.
Buy the new and improved Occam's Razor! Only $0 at any retailer!
>
> The theory of computation says that a computation is only a computation
> if there is: (a) Some raw data and (b) A Metalanguage assigning *meaning*
> to the data.
You are confusing two different usages of the word meaning:
1) subjective meaning, as in "that experience had meaning to me"
and 2) technical meaning, as in "the computer groks the meaning of the
bytes on the disk when it executes the program"
These two aren't the same.
You are using (1), and the theory of computation is using (2).
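To make sense (2) concrete, here is a toy sketch (mine, in Python; the byte
values are just an illustration). The same four bytes "mean" different
things depending on which unpacking scheme the interpreting program applies
to them:

    import struct

    data = b'\x40\x49\x0f\xdb'               # four raw bytes on a disk
    as_int = struct.unpack('>I', data)[0]    # read as a big-endian integer: 1078530011
    as_float = struct.unpack('>f', data)[0]  # read as a big-endian float: ~3.1415927
    # The bytes carry no meaning of type (2) on their own; that meaning is
    # assigned by whatever interpretation the executing program applies.

The point is just that meaning in sense (2) is relative to the interpreting
program; it says nothing about meaning in sense (1).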
> Since all reality is computation (theory of universal computation),
> reality must have a dual-aspect: being able to operate both as data (an
> object) and its own meta-language (a *subject*).
>
> See my theory on the SL4 Wiki:
>
> http://www.sl4.org/wiki/MarcGeddes/SentientCenteredTheoryOfMetaphysics
> >
> > --- HODGE PODGE ---
> > Now there's a tricky one! How do we know which principles are necessary
> > for sentience? Sentience is not a very well understood phenomenon. Of
> > course, if I use Marc Geddes' definition of sentience (if one was
> > proposed) then he will naturally be completely right about everything
> > he says about it.
> >
> > And even if I conceded that definition of the word sentience, we would
> > still have to worry about all the "non-sentients" which might do nasty
> > things like baby munching or tiling the universe with paper clips.
>
> We know what is necessary for reasoning: Induction and Deduction. We
> know what is necessary for self-awareness: the integration of current
> experience with past memories (John Taylor's Relational Theory).
--- BADLY DEFINED CONSCIOUSNESS ---
"the integration of current experience with past memories"?
You aren't noticing the elephant in the room! You are defining
self-awareness in terms of something called 'experience', which is just a
synonym for it.
You have made absolutely no progress until you define one of these "fuzzy
words" in terms of only non-fuzzy words.
> These things constitute sentience. Without the ability to reason, a mind
> cannot achieve its goals. Without the ability to be self-aware, a mind
> cannot experience anything and hence such a mind would not be a *moral*
> subject.
So it wouldn't be a moral subject. Maybe not in our opinion, but it would
certainly peg its own existence as a rather large chunk of utility.
> Induction, Deduction and the Relational Theory are
> well-defined.
See BADLY DEFINED CONSCIOUSNESS.
> Ethically then, the general cognitive processes underpinning them
> constitute things that must be *objectively* (universally) good. Why?
> Because without them, a mind cannot think about ethics in the first
> place, nor can it be a moral subject.
See MY INDUCTION.
> > > Since brains run on physical laws, there should be general principles
> > > that apply to all sentients.
> >
> > --- COMMON PRINCIPLES ---
> > Firstly I should point out that "principle" has at least two meanings,
> > a physical meaning, as in "the principle of least distance" and a moral
> > meaning, as in "a person of principle". They are very different. You
> > are intentionally or unintentionally mixing them up.
> >
> > So, sure, the laws of electromagnetism apply to all sentients, as do
> > all the other established physical principles. But this does not mean
> > that the resulting intelligences follow common moral principles. No
> > morality is inherent in the universe for reasons I already explained.
> For the reasons I carefully explained above, morality *is* inherent in
> the universe.
See MORALITY NOT OBJECTIVE.
> As I explained, brains run on physical principles. As I explained,
> brains cannot reason without these general well-defined principles:
> Induction, Deduction. As I explained, brains cannot experience anything
> without the general property of consciousness (caused by the interaction
> of current experience with past memories).
Brains cannot *experience* anything without consciousness, which is caused by
an interaction involving experience!? If your reasoning were any more
circular you would be nibbling at your own toes!
> Since brains cannot reason about ethics and cannot be ethical subjects
> unless Induction, Deduction and Qualia are occurring, the general
> *objective* cognitive procedures underpinning Induction, Deduction and
> Consciousness must be *universal* goods,
See MY INDUCTION.
> in the sense that *all* ethical sentients must agree that they are good
> (no sentient could claim that they are not good without contradiction,
> since they are cognitive processes that enable one to think and be
> self-aware in the first place).
See MY INDUCTION.
> > > Consciousness, Values and Intelligence *are* fundamental properties
> > > of the cosmos that need to be explained.
> > --- PROGRAM ANALOGY ---
> > Really? Is <insert your favorite large computer program here> a
> > fundamental property of the microchip it is running on? Try imagining
> > your program with some new feature added, or some old feature discarded
> > and you will see why that program was not fundamental. In the same way
> > it is not hard to hypothesize many different versions of consciousness,
> > values and intelligence.
> >
> > In the same way, all three of the properties you mention are no more
> > fundamental to the universe than the program is to the microchip. Our
> > versions of these three things are complex functional adaptations to
> > our evolutionary environment. AIs and aliens will have different
> > versions by reason of different design/evolution.
>
> I'm not using the word 'consciousness' to mean a *particular kind* of
> consciousness. Go look at the dictionary definition of 'Conscious'.
OK, I'll try that.
http://en.wikipedia.org/wiki/Consciousness
..."Consciousness is notoriously difficult to define or locate."...
I stand by my point.
> It's a noun - I'm talking about *consciousness itself*.
As mentioned before, we do not have a definition of consciousness.
Until we have one, particular kinds of consciousness are the only ones we can
meaningfully talk about.
See BADLY DEFINED CONSCIOUSNESS.
> Take your example above. I'm not talking about *a* particular computer
> program, I'm talking about *programs in general*. *A* particular
> computer program is not fundamental, but *computation itself* is. In
> fact Alan Turing's mathematical theory of UNIVERSAL computation provides
> strong evidence that computation is a fundamental property of the
> universe, in the sense that computation is everywhere present.
--- PROGRAM ANALOGY CLARIFICATION ---
That is not what I meant by the analogy. This analogy is a mapping between
physical systems and what they do computationally, not a mapping between
conscious systems and computer programs.
Talking about programs in general would be like talking about physical
systems in general. And yes, you do get lots of nice laws.
But that is not the same as talking about certain classes of programs (the
sentient ones or the ones that have values), which are not even clearly
defined subsets of the set of programs, and expecting nice theorems to appear.
> Similarly, the general properties *intelligence*, *values* and
> *consciousness* could easily be fundamental properties of the universe
> as well.
We do not even have definitions of these things (though we may have one for
intelligence at some time in the future). In fact, in the cases of values
and consciousness, the only basis for any conclusions is, practically by
definition, personal experiences. Given what complex systems humans are, I
find it highly implausible for these intuitive notions to be fundamental
properties of the universe.
> > The space of all logically possible sentients is absolutely massive!
> > Besides sentience itself, I very much doubt you will find much in
> > common.
> Remember that minds require brains and brains are physical objects. If
> there were nothing in common in the space of all logically possible
> minds, it would mean that one part of physical reality would not be able
> to interact with all other parts of physical reality in a consistent
> way. The laws of physics themselves couldn't function. Since the laws
> of physics *do* appear to be consistent and function the same
> UNIVERSALLY (as far as we can tell), there must be at least one thing
> that all possible minds have in common.
Yes! For example the fact that they accelerate at 9.8 m/s^2 when dropped out
of a window on Earth (barring air resistance), and many other nice laws of
physics. However, this doesn't mean that sufficiently smart minds share
common goal systems, as you appear to be arguing.
> > You say my true nature is to be self-aware, to reason and to be
> > altruistic. I might even admit that that is a definition of sane
> > humanity's true nature, or at least what we want it to be, albeit an
> > over-simplified one.
> >
> > But, to say that these are the true nature of sentience is another
> > thing altogether. You are treating three distinct properties as a
> > single unit.
> >
> > All three of these properties are probably as un-fundamental as the
> > existence of rice pudding and income tax. (See PROGRAM ANALOGY)
> >
> See above. I pointed out that a mind which cannot reason cannot reason
> about ethics. Therefore the ability to reason is a prerequisite to
> ethics. I pointed out that reasoning depends on Induction and Deduction,
> for which there are well-defined theories with UNIVERSAL applicability.
> Since reasoning is needed for ethics, and since the cognitive processes
> needed for ethics are objective, it follows that the cognitive processes
> needed for reasoning must be *universally good*.
See CONSISTENCY and MY INDUCTION.
> Similarly with consciousness. A mind which is not conscious is not a
> moral subject. Therefore the ability to be conscious is a prerequisite
> to being a moral subject. But there's an *objective* theory of
> consciousness - by John Taylor - consciousness is caused by the
> interaction of current experience with past memories. Since
> consciousness is needed to be an ethical subject and since the cognitive
> processes needed for consciousness are objective, it follows that the
> cognitive processes needed for consciousness must be *universally good*.
See BADLY DEFINED CONSCIOUSNESS.
>
> ALL sentients everywhere, in order to be consistent, must conclude that
> the cognitive processes resulting in reasoning and consciousness are
> good. If any sentient tried to say that these cognitive processes were
> bad, they would be contradicting themselves, since without these
> cognitive processes the sentient would be unable to reason about ethics
> in the first place.
See CONSISTENCY and MY INDUCTION again.
>
> This proves Objective morality.
No it doesn't!
-=+Marcello Mathias Herreshoff