Re: Humans are not equal.

From: Chris Capel
Date: Mon Jun 06 2005 - 23:44:11 MDT

On 6/6/05, Russell Wallace <> wrote:
> On 6/7/05, Joel Peter William Pitt <> wrote:
> > So this got me thinking once we are able to read minds and an FAI can
> > upload us while also examining our memories, what should happen when
> > crimes that have never been revealed or had justice done come to light
> > (I'm talking about major offenses like murder, rape, assault)?
> >
> > Should individuals' memories be off limits for examination by FAI or
> > any information found just ignored?
> My opinion on this is that the objective of Singularity strategy
> should be to preserve the human race and the potential of sentient
> life. We should _not_ expect FAI to be a magic genie for solving all
> the world's problems - if it can stop evolution moving past the region
> of state space in which sentience exists, that'll be more than enough.
> Now, trying to use it to solve all unsolved crimes strikes me as
> dangerous - sure, it would be nice to catch murderers that way as far
> as it goes; then what happens if the RIAA chips in with their desire to use
> it to catch everyone who's ever copied music? This gets very messy and
> very dangerous very quickly. I'm inclined to think we should
> concentrate on using FAI to save the world and leave criminal justice
> to the mechanisms that have been established to handle it, however
> imperfect those mechanisms may at times be.

And I'm inclined to think that our current justice systems are
incredibly primitive. With a good, solid system for filtering and
combining the thoughts and opinions, the dialectic, of large numbers
of intelligent people, we could devise, even with our current limited
intelligence, solutions to problems like criminal justice that are
many times better than anything we can come up with and implement
today. And none of this requires transhuman ideas: no AI, no augmented
intelligence, just the better communication channels made possible by
applying collaborative filtering and classification to discussions.
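To make the idea concrete, here is a toy sketch of how user-based collaborative filtering might surface valuable comments in a large discussion. Everything here is a hypothetical illustration: the user names, the comment IDs, and the vote data are all invented, and a real system would need far more than cosine similarity over up/down votes.

```python
# Toy user-based collaborative filtering over discussion comments:
# readers who voted similarly in the past are used to predict which
# unseen comments a given reader would find valuable.
# All names and data below are hypothetical.
import math

# votes[user][comment_id] = +1 (found useful) or -1 (did not)
votes = {
    "alice": {"c1": 1, "c2": 1, "c3": -1},
    "bob":   {"c1": 1, "c2": 1, "c4": 1},
    "carol": {"c1": -1, "c3": 1, "c4": -1},
}

def similarity(u, v):
    """Cosine similarity over the comments both users rated."""
    shared = votes[u].keys() & votes[v].keys()
    if not shared:
        return 0.0
    dot = sum(votes[u][c] * votes[v][c] for c in shared)
    norm_u = math.sqrt(sum(votes[u][c] ** 2 for c in shared))
    norm_v = math.sqrt(sum(votes[v][c] ** 2 for c in shared))
    return dot / (norm_u * norm_v)

def predict(user, comment):
    """Similarity-weighted average of other users' votes on `comment`."""
    num = den = 0.0
    for other in votes:
        if other != user and comment in votes[other]:
            s = similarity(user, other)
            num += s * votes[other][comment]
            den += abs(s)
    return num / den if den else 0.0

# alice never saw c4; bob (who votes like alice) liked it, and
# carol (who votes unlike alice) disliked it, so the prediction
# for alice comes out strongly positive.
score = predict("alice", "c4")
```

Even a crude filter like this illustrates the point: with no intelligence beyond simple arithmetic, the votes of many readers can be combined so that each person sees the contributions most likely to matter to them.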

So if we bring a superintelligence into the picture, I think that the
least I'd want from vim would be some sort of impetus for the
world-wide installation of a truly effective system of human
self-government. And possibilities like that are only what I can
imagine. An AI could probably do much better.

Petty squabbles instigated by dying monopolies cynically
manipulating broken intellectual property law will be the furthest
thing from our minds. Trivialities like those will be the first things
to be solved.

Moving the topic along a bit, let me make the observation that most
political problems in stable, reasonably open governments seem to be
made possible by an incredibly large amount of apathy in the general
populace. Would it be moral for a superintelligence to cause people to
realize the role they could play in the political process?
To--I don't want to say "force"--prod people into waking up to the
issues of governance they ought to be aware of?

Hmm. Is that very off-topic?

Chris Capel

"What is it like to be a bat? What is it like to bat a bee? What is it
like to be a bee being batted? What is it like to be a batted bee?"
-- The Mind's I (Hofstadter, Dennett)

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT