From: Samantha Atkins (samantha@objectent.com)
Date: Tue Mar 27 2001 - 01:14:55 MST
"Christian L." wrote:
> > > I fail to see the need for discussing concepts like good, evil, morality or
> > > ethics at all, or how a Power/SI would relate to them.
> >
> >This seems like a very limited and inhuman[e] view.
>
> Well, the SI is non-human after all (see above), and it is the actions of
> the SI that will matter post-singularity.
>
That misses the point I was attempting to make. Any sentient among
other sentients requires some system of ethics, of fundamental working
principles governing association with others. Even "I wish to do this
and there is no being with the power to stop me, so I will" is a form of
ethics. If that were the only ethic we could predict an SI would have,
then we would have a very reasonable basis for considering the
achievement of such a being extremely problematic, and not so much an
achievement as the creation and letting loose of a super-powerful,
super-intelligent demon of our own making.
> >If there is no
> >ethics then there are no guiding principles to your actions, no
> >abstraction of what is truly in your self-interest longterm or not
>
> Since it is my belief that the post-singularity world will be unknowable, my
> definition of long-term is on the order of 20-25 years. My guiding
> principle is reaching singularity as fast as possible. If you want to call
> that ethics, that's fine with me.
>
That it is unknowable in any detail does not mean that all principles
and fundamental necessities of peaceful co-existence go away or are no
longer worth talking or thinking about.
> >and
> >every decision governing whether to take an action is utterly
> >seat-of-the-pants at that moment. You also cannot depend on any context
> >for the actions of others as they will make their own seat-of-the-pants
> >(pls excuse physical metaphors) decisions moment by moment.
>
> I agree. Humans act in their own self-interest. They have done so in the
> past, and they will probably continue to do so in the future. However, most
> of the time it is in their own self-interest to be nice to members of their
> "pack".
>
But it is not seat-of-the-pants, because that would be too chaotic. We
instead develop social norms that, imperfect as they are, allow some
semblance of order and reasonable expectations in our interactions. It
is not as simple as a "pack" mentality.
> >There can
> >be no level of trust.
>
> If you organize yourself in a "pack" and follow the rules set up there, you
> can get personal protection and greater means of achieving your goals (they
> normally coincide with those of the pack). When you interact with another
> pack-member, you can be pretty sure that he/she will not break the rules and
> risk exclusion from the pack. This can be called trust. The rules that the
> pack sets up can be called ethics.
>
Why is it that some people, in attempting to be scientific, relegate
human beings to the level of a wolf pack and expect AIs to have not even
that much social cohesion or sense of ethics? Ethics are not simply pack
mentality.
> > > Ethics seem to be little more than rules set up by humans in order to
> > > maintain a fairly stable society. I don't see how that can have any
> > > meaning in the post-Singularity world or even in the last years leading
> > > up to the Singularity.
> >
> >What, the need goes away for stable associations of entities? How so?
>
> There will be ONE relevant entity. This entity will IMO relate to humans as we
> relate to bacteria. We do not make stable associations with bacteria.
>
How so? I do not believe that there will be only One, or that other
beings are irrelevant simply by virtue of being far less intelligent and
capable. If I believed that, I would be forced to fight against the
creation of an SI with all of my power.
> Again, the unknowability assumption makes it impossible to predict anything
> IMO.
>
To me this is a huge cop-out. Yes, we may not be able to influence the
design and evolution of SIs much, but if we simply say that we can't
predict everything and therefore will not even try, then we deserve
whatever we get.
> >The need for stable associations and for a governing primary ethical
> >principle, if anything, increases as the entities get more powerful and
> >capable of greater harm, and as the interactions and activities get
> >orders of magnitude more complex.
>
So you think one SI, being alone (granting this doubtful assumption for
now), has no need of ethics at all?
- samantha