From: Samantha Atkins (firstname.lastname@example.org)
Date: Tue Sep 05 2000 - 11:25:36 MDT
"Eliezer S. Yudkowsky" wrote:
> [...] about our reasons for being on this quest.
> I also have to disagree, at least with Gabriel's phrasing. That what we do is
> so incredibly important does not excuse us from critical thinking; rather, it
> makes self-questioning mandatory. But only on the level of beliefs, not of
> actions. We cannot afford to be certain, or to blunt our own perceptions, but
> we cannot afford to be hesitant either. We certainly cannot afford to
> ostentatiously pretend to be more uncertain than we really are, for the sake
> of transhumanist correctness.
What does "correctness" have to do with any of the positions expressed?
I think we are basically in agreement that simply staying honest is what matters.
> I find that it is possible to easily blend fanaticism and uncertainty - the
> fact that the probability of an assertion is not 100% doesn't have to
> automatically lead to lessened enthusiasm, so long as the particular course
> of action is still rationally the best available.
Dunno. My brand of fanaticism is a pretty scary thing. I am one of
those rare people who can turn my life inside out and work 16 hours a
day for long stretches when I am truly fanatic about something. But it
burns me out pretty badly, especially as I get older. So I tend to
avoid it except for things I am really certain are that pressing. I am
not yet convinced going full tilt for Singularity is the best course.
> > Another possible way is to condition, evangelize, teach
> > enough of humanity at least at enough of the key positions a different
> > way of seeing life and what they are doing such that the threat of
> > self-inflicted global catastrophe goes down.
> This is improbable to the point of vanishment. At least AI is an engineering
> project. You're talking about bringing about a basic change in the nature of
> all humanity, enough so that NOBODY starts a nanowar, and doing it without
> ultratechnology and without violating ethics. I just don't see this
> happening. It looks to me like if you started a massive project with a
> hundred billion dollars worth of funding, intended to educate humanity, you
> might become a big media sensation but you would not accomplish enough to save
> the world. Not even close.
Actually I was talking about an engineering project also, memetic
engineering. You would not have to convert all of humanity. Just a
high enough percentage of the most powerful and influential. The meme
system I would try to inculcate is that humanity can have pretty much
all of the things promised by all the pie in the sky there ever was, if
it can embrace the science and technology that will make it so and put
aside the old-style conditioning that prevents it from making this world a
place of plenty and of peace for all people. Right now, very few voices
are putting out such a message. Whether or not it is enough, and I
doubt it is, I think it is very important that those memes become well
planted for what is to come. For that matter, I am not sure it is
possible for the people seeding and laying the foundations for a
Singularity to do a decent job without having pretty much that memetic
imprint and passing it on. Just a thought.
> > It just goes against my programming, it seems a bit like walking
> > away from the human race to do the alchemistic thing of huddling
> > together with some like-minded folks in front of our supercomputers
> > attempting to give birth to a god.
> Yes, this is pretty much what we're doing. Humanity can sort itself out
> afterwards, after we're no longer tap-dancing across a minefield and we have
> some decent tools.
> The Universe would be a great place if there was a human way to solve this
> human problem. There isn't, and we have to resort to AI. That's pretty much
> all there is to it.
Show me. You are making a level of assertion that I assume has
carefully thought-through logic behind it. I need more of that logic.
The assertion by itself is not enough.
And why should I believe that it is even possible for a handful of
Singularitarians to birth this godling? I would generally think such an
achievement could only be a capstone on the work of the best minds of
the entire race. It is not the type of project I would expect a small
group to succeed at. This AI is going to be starving for input, for information,
for tools. It will take more than this cabal to feed and nourish it
until it can feed itself. Yes, that is a ways down the road.
But a part of me thinks that if the race cannot get itself together
enough to avoid destruction then it is no fit parent of a Power. We,
who cannot deal with and take care of one another, dream of creating a
magic Genie who will, despite being originally designed by us and in our
image, care for us and fix all of our problems and brokenness? Doesn't
that sound just a bit like a form of escapism and the height of wishful
thinking?
If my 20 years of being a software goddess has taught me anything, it
has taught me that regardless of how bright you are and how dedicated
you are, one or a few people can only do so much in so much time. Even
goddesses have their limits. Yet here we are acting as if a relatively
closed cabal (not even Open Source for instance) can hack together the
very philosopher's stone of Computer Science and far beyond. I can believe
several near-impossibilities before breakfast, but it is hard for me to
make this project fully real and doable in my mind. And if I get fully
involved I must make it real. It is my nature. If I get involved I
will see this thing succeed or die trying (well, maybe except for my
head). I don't for a minute think this is a project that has a prayer
of succeeding without that level of dedication. The question is
whether it has a prayer even with it and whether it is, as Eliezer says,
simply the only reasonable thing to do.