Pre-Singularity human enlightenment

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Sep 20 2002 - 00:09:47 MDT


Samantha Atkins wrote:
>
> I also think that we need to learn to think in a much more "holistic"
> or integral manner with much more real care for the maximization of all
> individual potentials before we get to Singularity. Again, I don't
> see how just getting to Singularity makes this more likely to occur if
> the groundwork has not been laid. Sure, the Singularity could simply
> establish this, plus or minus "Transition Guides", but we need as much
> of this as we can possibly get beforehand if we are to survive to
> reach Singularity.

What are you going to do that hasn't been tried in the last 50,000 years?
Remember, we AI folk have to deal with the shadow of fifty years of
failure; everyone wants to know what we're going to do that hasn't been
tried before. You have to dig yourself out from under the heap of other
people's failures; show that your thinking is new enough not to belong in
the trash heap with the things that have been tried before.

Why is your thinking that new? Why are your plans that original?

Understand, I am not asking this in order to be difficult. I can think of
at least three things you could try, in terms of promoting greater human
enlightenment before the Singularity, that have never been tried before.
What I want to know is what *you're* thinking of.

It doesn't do much good to issue calls to action without a strategy that
promises to work. Not a strategy that *sounds* good. Not a strategy
that blazes like a banner and uplifts your heart. A strategy which,
unlike the stuff that's been previously tried over the last 50,000 years,
will actually work.

> I am probably expressing this inadequately. But I believe that we
> sometimes make the mistake of overemphasizing the intellect side of SAI
> and underemphasizing its "heart" - the deep appreciation, caring for,
> compassion for, nurturing of all. A god-sized being without that
> would be extremely problematic. Our conceptions of SAI are in danger of
> being unbalanced in much the same ways that we ourselves lack balance.

Again: What are you going to do, and why is it going to work when
everything previous has failed? To quote Doonesbury, "Let's kick butt!"
is not a plan unless you know which butts to kick, how far, and in which
direction.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
