From: Olie L (neomorphy@hotmail.com)
Date: Thu Dec 15 2005 - 21:46:02 MST
>From: Samantha Atkins <sjatkins@gmail.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: Destruction of All Humanity
>Date: Thu, 15 Dec 2005 18:34:32 -0800
>
>Better for whom, Ben?
How about better for the aggregate of sentient beings - human and non-human
alike?
(I don't believe the following; I'm playing devil's advocate.)
It's not that uncommon to believe that "the world" would be better off by
killing particular sets of sentient beings - for instance, that the aggregate
of humans would be better off by killing a set of humans (such as Nazis).
It's not so illogical to extend this to suggest that all sentient beings
could be better off by removing a proportionately small set of sentient
beings, even if that small set happens to include an entire species.
Suppose, for example, there were a set of hypothetical insects, very large in
number, that are sentient and highly intelligent, and that humans constantly
and unavoidably kill (think of how often we kill ants, whether we mean to or
not). Suppose, furthermore, that because these intelligent insects appear
repulsive to us, many humans are strongly inclined to kill them, even after
learning that those ugly insects are sentient. Does the idea of asking 5
billion humans to voluntarily end their lives, in order to save the lives of
many, many trillions of insects, seem quite so unreasonable, considering that
the plans of humans could be perpetuated with AI?
Of course, it does seem far-fetched that there wouldn't be "better" viable
alternatives.
==Out of ridiculous exampleville==
Self-sacrifice will never seem like the best choice from the personal
self-interest POV of the sacrificee. However, our own experiences are often
less valuable than our interests. We care more for our children's wellbeing
than we do for our experiences of our children's wellbeing, and this is not
only our hormones doing the thinking... self-sacrifice can be laudable under
most rational moral systems.
Forcibly killing a person for the betterment of others induces queasiness,
but many humans can find it reasonable... a wise being persuading a person
that self-sacrifice might be in the aggregate good can evoke the same
queasiness, but that doesn't mean such actions cannot be intellectually
defensible.
-- Olie
>Do you believe in some Universal Better that trumps
>the very existence of large groups of sentient beings - the only type of
>beings that "better" can have any meaning for? How could it be better to
>an intelligence capable of simulating an entire world and even a universe
>for humans to exist in with an infinitesimal fraction of its abilities? I
>do not believe simply destroying entire species of sentient beings when
>there are viable alternatives could qualify as "better" - certainly not
>from the pov of said beings. I don't find it particularly intelligent to use
>our intelligence to make our own utter destruction "reasonable". I would
>fight such an AI. I might not last long but I wouldn't simply agree.
>
>-- samantha
>
>
>On 12/12/05, Ben Goertzel <ben@goertzel.org> wrote:
> >
> > Hi,
> >
> > I don't normally respond for other people nor for organizations I
> > don't belong to, but in this case, since no one from SIAI has
> > responded yet and the allegation is so silly, I'll make an exception.
> >
> > No, this is not SIAI's official opinion, and I am also quite sure that
> > it is not Eliezer's opinion.
> >
> > Whether it is *like* anything Eliezer has ever said is a different
> > question, and depends upon your similarity measure!
> >
> > Speaking for myself now (NOT Eliezer or anyone else): I can imagine a
> > scenario where I created an AGI to decide, based on my own value
> > system, what would be the best outcome for the universe. I can
> > imagine working with this AGI long enough that I really trusted it,
> > and then having this AGI conclude that the best outcome for the
> > universe involves having the human race (including me) stop existing
> > and having our particles used in some different way. I can imagine,
> > in this scenario, having a significant desire to actually go along
> > with the AGI's opinion, though I doubt that I would do so. (Perhaps I
> > would do so if I were wholly convinced that the overall state of the
> > universe would be a LOT better if the human race's particles were thus
> > re-purposed?)
> >
> > And, I suppose someone could twist the above paragraph to say that
> > "Ben Goertzel says if a superintelligence should order all humans to
> > die, then all humans should die." But it would be quite a
> > misrepresentation...
> >
> > -- Ben G
> >
> >
> >
> > On 12/12/05, 1Arcturus <arcturus12453@yahoo.com> wrote:
> > > Someone on the wta-list recently posted an opinion that he attributed
> > to
> > > Mr. Yudkowsky, something to the effect that if a superintelligence
> > should
> > > order all humans to die, then all humans should die.
> > > Is that a wild misrepresentation, and like nothing that Mr. Yudkowsky
> > has
> > > ever said?
> > > Or is it in fact his opinion, and that of SIAI?
> > > Just curious...
> > >
> > > gej
> > >
> > >
> > >
> >