From: Samantha Atkins (firstname.lastname@example.org)
Date: Sat Jun 15 2002 - 17:38:15 MDT
Ben Goertzel wrote:
> You wrote:
>>Well, in my personal opinion and speaking plainly, disowning
>>one's species, literally all of humanity include one's self and
>>loved ones, for an unknown set of hopefully stable higher
>>intelligences is a very deep form of treason and betrays a
>>singular contempt for humanity that I find utterly appalling.
> But I really feel this is not correct.
> The attitude you describe does not necessarily imply a *contempt* for humanity.
> On the contrary, it *could* imply this, but it could also imply a mere
> *indifference* to humanity.
How can or should one be "indifferent" to one's species and to
the survival of all existing higher sentients on this planet?
If one is for increasing intelligence (how one defines that, and
why it should be the sole or primary value, are good questions) and
for the increase of sentience, I fail to see how one can be cavalier
about the destruction of all currently known sentients. How can
one stand for intelligence and yet not care about billions of
intelligent beings that already exist?
> I confess that I feel somewhat indifferent to humanity, sometimes (not
> always!). Sometimes I just think of humanity as a vehicle for intelligent
> mind. And, I think: If a better vehicle comes along, why is the
> human-vehicle so important? And why are our individual human minds --
> including mine -- so important. In the big picture of the evolution of life
> and mind in the cosmos, surely they aren't....
Sentient beings are important. Wiping out existing ones for
next year's model is not exactly ethically neutral. If we do
not care about sentient beings as long as something we think
might be more powerful is on the horizon, then we do not care
about ourselves or our fellow beings. There is no getting around this.
> Sure, the human race is important to me emotionally... just like my family
> is important to me emotionally ... just as my own limbs are important to me
> emotionally ... but why are my personal human emotions so important?
Why would you disown what is of value to you? On the basis of a
hypothetically better intelligence (along some dimensions of
"better")? How is it justified to speak so casually of "it"
being more important not only than your own values and life but
than those of all other human beings as well?
All I am pushing for here is that we pause before writing off
the human race as inconsequential so long as we can reach the
Singularity. To do so trades the known for something that is
unknown and unknowable. That does not seem reasonable to me.
> Indifference to humanity could come out of nihilism. But it could also come
> out of having values that go beyond any particular species, or any
> particular vehicle, and focus on general things like intelligence and...
> In short, there are MANY different psychological motives that could underly
> the attitude Mike displays, contempt being only one of them.
>>How about we just grow a lot more sane human beings instead of
>>digging continuously for technological fixes that really aren't
>>fixes to the too often cussedness of local sentients? Replacing
>>them with something that is faster and arguably smarter but may
>>or may not be any more wise is not an answer. Scrapping
>>sentients is to be frowned upon even if you think you can and
>>even do create sentients that are arguably better along some
>>dimensions.
> "Wisdom" is a nebulous human concept that means different things to
> different people, and in different cultures.
I can tell you what it isn't. It isn't about writing off all
existing sentients in favor of something you think might be
better when you have no idea what it will become. Human
sentience is the only basis we have to reason from; in any
event it is what we ourselves are, and we have no choice but to
reason from it.
> However, I think it's pretty likely that intelligent software WILL be wiser
> than humans, due to reasons Eliezer has pointed out nicely in his writings.
> We have an evolutionary heritage that makes it really tough for us to be
> wise, and there seems to be no reason why intelligent software would have
> any similar problem.
It is not at all clear that lacking an evolutionary history, or
processing more information faster, leads to wisdom or to
"better" for sufficient values of "better". How will we even
evaluate the question? What are the criteria, and how do we know
they are the correct criteria? I think we should be very sure
of the answers to such questions, since nothing less than the
survival of the human race is at stake.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT