Singularity Objections: Friendliness, desirability

From: Thomas McCabe (pphysics141@gmail.com)
Date: Tue Jan 29 2008 - 13:41:27 MST


 Desirability

    * A post-Singularity mankind won't be anything like the humanity
we know, regardless of whether the Singularity is positive or negative
- therefore it's irrelevant which kind we get.

It's unethical to build AIs as willing slaves.

There are two parts to this objection. First, it could be argued that
it's unethical to restrict a mind's freedom of choice. But if you have
the freedom to build a mind with an arbitrary set of desires, what
level of uncertainty would need to be incorporated before the
programmed choice was no longer a programmed choice? Would the mind
have a true choice if you estimated that it chose things a certain way
90% of the time? 70%? 50%? Is it only ethical to craft minds for as
long as you are so poor at the art of mindcraft that you don't even
know how to estimate those probabilities? That would amount to saying
that it's only ethical to build minds when you have no idea what they
will do to their environment and to others. That wouldn't be ethical -
it would be criminally irresponsible.

It could be suggested that it would be more ethical to simply treat
the created AI well, so that it would find the choice of helping
humanity attractive. But that argument only works if you are limited
to building a certain kind of mind - for instance, a very human-like
mind. When you are free to define all of a mind's preferences, what is
the difference between making it an attractive option to assist humans
and programming it to make a certain decision? We readily assume that
certain things are more "natural" for minds to prefer than others,
because we have evolved to consider them inherently natural. But
ultimately there is no reason why it would be right or wrong to make a
mind prefer one sort of treatment over another, or why it would be
right or wrong to make a mind prefer acting in certain ways.

    * You can't suffer if you're dead, therefore AIs wiping out
humanity isn't a bad thing.
          o Rebuttal synopsis: It seems very plausible that AIs wiping
out humanity would cause immense suffering in the process.
Furthermore, it would be a horrible waste to let humanity be destroyed
when a positive Singularity could give humanity a future with no
involuntary suffering at all.
    * Humanity should be in charge of its own destiny, not machines.
          o Rebuttal synopsis: Then we should build AIs to implement a
program such as CEV (Coherent Extrapolated Volition), which helps
humans take charge of their own destiny.
    * A perfectly Friendly AI would do everything for us, making life
boring and not worth living.
          o Rebuttal synopsis: An AI that would make life boring and
not worth living would by definition not be perfectly Friendly. If
there is some optimal level of adversity that humans need in order to
thrive, then a perfectly Friendly AI would create a world where
everybody faced that optimal level - assuming they didn't want to
modify their psyches to require a different level of adversity.
    * The solution to the problems that humanity faces cannot involve
more technology, especially such a dangerous technology as AGI, as
technology itself is part of the problem.
    * No problems that could possibly be solved through AGI/MNT/the
Singularity are worth the extreme existential risk incurred through
developing the relevant technology/triggering the relevant event.
          o Rebuttal synopsis: The Singularity will eventually be
triggered anyway. We aren't aiming to trigger it as fast as possible;
we're aiming to trigger it as safely as possible.
    * A human-Friendly AI would ignore the desires of other sentients,
such as uploads/robots/aliens/animals.
          o Rebuttal synopsis: Preferably, Friendly AIs would be built
to be Friendly towards all sentient life.

 - Tom


