From: Michael Anissimov (michaelanissimov@gmail.com)
Date: Wed Aug 16 2006 - 11:11:37 MDT
On 8/15/06, Philip Goetz <philgoetz@gmail.com> wrote:
> So you think goodness and evil are inherent, objective, context-free
> properties of people?
Not really... but this is beside the point for the purposes of what I
was responding to. You're reading too much into my example. The
point was simply that it will eventually be possible to 'manufacture'
kindness without having to struggle through any sort of complex
synergistic process, as Jef Allbright seemed to imply with the
following:
"Note that this approach is inherently evolutionary. There is no
static solution to the moral problem within a coevolutionary scenario.
But there are increasingly effective principles of what works to
maximize the growth of what we increasingly see as increasingly good."
The point is also that, eventually, everything reduces to engineering.
As another example, imagine a social pact in which each person's moral
model is automatically updated based on silent requests from the people
around them, sent to their brain over a wireless network. This would
be "augmented morality", and it would subtly contradict the implicit
message Jef was putting across when he said, "It's time for humanity to
grow up and begin taking full responsibility for ourselves and our way
forward." It's not "taking full responsibility" per se when you are
using machines to update your morality automatically.
A society full of such individuals might be able to dispense with
discussing things face to face, holding votes, and all the other
moral/political activity that humans engage in today.
Jef also said, "We are conditioned to expect that a greater entity
(our parents?, our god?) will know what is best and act in our
interests." The fact of the matter is, a greater and more intelligent
entity might indeed know what is best for us and act in our interests
in ways far deeper, elegant, and long-lasting than we would be able to
act for ourselves. The fear of this outcome stems from so many
powerful humans who say they want the best for us but then abuse their
power. But to deny its possibility is blatant anthropocentrism.
--
Michael Anissimov
Lifeboat Foundation
http://lifeboat.com
http://acceleratingfuture.com/michael/blog