Re: ESSAY: Forward Moral Nihilism

From: M T (nutralivol@yahoo.co.uk)
Date: Sun May 14 2006 - 06:20:01 MDT


> Given this scenario, what "instincts" could one
> define that:
> a) are dependent on nothing that cannot be directly
> sensed by a program
> running on a computer with no predictable set of
> peripherals attached
> b) lead to benevolence actions towards sentient
> entities. (I don't think
> we need to consider benevolence towards
> doorknobs...but what about
> goldfish? Ants? Cockroaches? Wolves? Sheep?)

Given that the AGI will eventually have to sense and
interact with the world, an instinct to be benevolent
towards entities similar to itself might work.

Since the AGI will be self-aware, it will try to
ascertain whether a pattern it encounters (e.g. a
human) is self-aware before taking any action that
may hurt or aid (disrupt or prolong?) the pattern in
question, since self-awareness is a statistical
rarity.

Comparing a conscious human with a doorknob, the AGI
may find that it is 90% similar to the human and 5%
similar to the doorknob.

If the relation between similarity and benevolence is
somewhat exponential, a 90% similarity would ensure a
benevolent reaction (say, 90% similarity = 10^90
benevolence).
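
To make the numbers concrete, here is a rough Python sketch of such an
exponential similarity-to-benevolence mapping. The function name and the
base/scale constants are only my own illustration of the 90% -> 10^90
figure above, not a proposal for the actual curve:

    # Hypothetical mapping from a similarity score in [0, 1] to a
    # "benevolence weight"; constants chosen so that 0.90 -> 10**90
    # and 0.05 -> 10**5, as in the doorknob comparison above.
    def benevolence_weight(similarity, base=10.0, scale=100.0):
        if not 0.0 <= similarity <= 1.0:
            raise ValueError("similarity must be in [0, 1]")
        return base ** (scale * similarity)

    human = benevolence_weight(0.90)     # about 1e90
    doorknob = benevolence_weight(0.05)  # about 1e5
    print(human / doorknob)              # about 1e85 -- the human dominates

The point of the exponential shape is that even modest differences in
similarity translate into enormous differences in how much benevolence
gets allocated.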

I realise we have used "benevolence" loosely without
defining it. It could be something like "curiosity
towards understanding", "inclination towards
cooperation", or "refraining from breaking that
pattern". That's a separate subject, in any case.

To sum up: pre-build an instinct in the AGI that
allocates friendliness towards sensed patterns
according to their similarity with itself.

English is not my mother tongue, so try reading
between the lines as much as you can to compensate for
my language use.

And Cockney rules, by the gods.

Michael



