Re: "friendly" humans?

From: Tyrone Pow (tyrone@ubermetal.com)
Date: Thu Jan 01 2004 - 06:38:06 MST


----- Original Message -----

From: "Wei Dai" <weidai@weidai.com>

> On Thu, Jan 01, 2004 at 06:13:19PM +1300, Nick Hay wrote:
> > [...] Were evolution to have honestly represented its intentions
> > and made us, simply and directly, want more children (or, rather, want
> > to become ancestors), then derived everything from that as a subgoal,
> > it'd have had better luck when our environment changed. [...]
>
> I think you missed one of my points. Evolution *did* directly make us want
> more children, not through genes, but through co-evolved symbiotic memes.
> Hence my reference to the "be fruitful and multiply" passage in Genesis 1.
> I think evolution was quite clever here, and came to the same conclusion
> that Eliezer reached: the supergoal needs philosophical support. It could
> not be provided in the genes and had to come from symbiotic memes because
> genes can't evolve fast enough to defend against parasitic memes.
> Unfortunately for the genes, the parasitic memes are now evolving faster
> because of greater communications bandwidth between non-relatives so that
> even the symbiotic memes can't catch up.

Since when do humans inherently want 'more' children? Speaking for about half
of the human population, men have always been driven through the mating
game by a desire for sex, not offspring. The advent of birth control didn't
bend any supergoal-justifying philosophies; it merely let men get
what they want _without_ leaving a trail of responsibility behind. Evolution
gave us a host of motivations, many of which operate through unconscious and
conceptually indirect means. Philosophical *justifications* are a fickle
signature of culture, not of evolution.

> > I agree that our attempts at deceiving an SI will likely fall short. I
> > think that your attempted solution will likely fall short, just like
> > any other methods we try to think up (as described below). But why try
> > this in the first place? Why treat this SI as an adversary to be bound
> > to our will? In so much as we're creating a mind, why not transfer the
> > desire to do good, in a cooperative sense, rather than attempting to
> > apply corrections to some "innate nature"? In any case, the adversarial
> > attitude, as this stance is termed in CFAI, appears pretty much
> > unworkable.
>
> There are three possibilities that we can't rule out at this point. 1) Any
> SI will have a natural tendency towards doing good. 2) An SI will always
> find philosophical justifications for doing good convincing if we seed
> it with the right initial philosophy. 3) An SI will find these
> philosophical justifications silly in some situations. I'm arguing that
> case 3 is likely, and therefore we need to reduce the likelihood or
> frequency of those situations as much as possible. I'm pointing out that
> in our own case we realized the silliness of the justifications for
> maximizing the number of biological offspring after being "infected" with
> parasitic memes, and therefore we should try to prevent the analogous
> thing from happening with SIs.
>
> (Of course there's possibility 4, that an SI will always find
> justifications for doing good silly, but there's not much point in
> worrying about that one.)

Lately, there has been too much talk on this list about what's technically
possible, with logical justifications thrown to the wind. Morality and
intelligence are both philosophically (and, if you're talking about practical
implementation, scientifically) loaded guns, and the former is a tangible
property only in the abstract. To imply that one has some implicit
correlation with the other is to reduce both to low-resolution symbols and
paste them together with some sort of theoretical, yet totally undefined,
glue.

Tyrone Pow
www.tyronepow.com
