From: Byrne Hobart (sometimesfunnyalwaysright@gmail.com)
Date: Tue Aug 14 2007 - 08:34:15 MDT
> The general principle behind that affirmation also implies that
> someone's aptitude for doing fast linear algebra and Fourier
> transforms in their heads (basically autistic savants (I'm not bashing
> autistics, there's a chance I'm one)) correlates to their aptitude for
> scientific (numerical) programming. Do you really want to uphold a
> theory that implies that?
All else being equal, of course it does! If nothing else, it means they
spend more time thinking about hard problems because they spend less time
looking for their calculator. But more seriously: 1) even if there isn't
a direct correlation, being able to solve certain problems easily will
affect which problems you consider solvable -- and, again, all else being
equal, the people who make the most progress are the ones who think we can
make the most progress. 2) there is a correlation, because the trait that IQ
measures is predictive across numerous fields (
http://www.gnxp.com/blog/2007/03/g-precis.php ).
> Oh well. I think I'm killing a fly with a cannonball by now. But
> really, the discussion on /what/ will differentiate humans in their
> ability to relate to post-singularity Powers or whatever (I'm not keen
> on sci-fi words; the singularity /is/ an event horizon) can't be based
> on their current similarity to what the Powers would work like. I'd
> venture it's the opposite -- what matters is how COMPLEMENTARY to the
> Powers a person is.
I'm not sure. When Northern Europe emerged from the Dark Ages, literacy was
an increasingly valuable skill because it allowed people to interface with
wealthy and knowledgeable elites. A skill complementary to that of effete,
wealthy scientist/statesmen like Newton would be the ability to bash in
skulls using blunt objects, but the demand for skull-bashers was not, as far
as I know, rising during the Renaissance.
AIs thrive in an environment that values intelligence. AIs will also be
powerful enough to change their environment. When we reach an equilibrium, I
think it'll devalue many human traits (interpersonal skills, for example,
won't mean much if most emotional responses are an avoidable cost).
> This all, of course, is based on the rather unpleasant scenario of an
> "unfriendly" singulatiy where we're all reduced to serving a godlike
> Power's needs.
We'll always be constrained somehow -- I'd probably give up a large
fraction of my political and economic liberty in exchange for faster thought
and statistical immortality, and that's probably the deal we're going to
get. The AI's pitch will be, roughly, "I can allocate resources more
effectively than you, and I'm constantly getting better. But given guidance
from me, you're worth more than it costs to keep you around. So I'll keep
you around, as long as you do what I say -- and you'll live a longer,
happier life for it." The AI isn't a politician or a cult leader; it's not
making zero-sum allocations like everyone else who makes that kind of
promise.
> In a rather tangential sidenote, isn't the current research in the
> mood mechanisms of the human brain transhumanist at some level? I know
> I feel augmented by the psychotropic medication I've been taking, the
> way (though not in the same magnitude) I venture I'd feel if I had
> coprocessor chips implanted.
The first step on the road to the Singularity: the invention of beer.
And this is a good point: between coffee and pot, we've all gotten a lot
better at using chemicals to regulate our mood so we're productive when we
have to be and happy when we want to be. Unfortunately, I don't think
there's a huge amount of progress possible through psychotropics. If we get
to the point where mood is always aligned with need, how much more
productive will we be? It probably varies from one field to another, but
there are limits (and that's ignoring the long-term effect of these
chemicals; jury's still out, except where it's in and the verdict is bad).