Re: Continuing Evolution in Humans (was: A very surreal day)

From: Diego Navarro (the.electric.me@gmail.com)
Date: Tue Aug 14 2007 - 07:40:47 MDT


> > > No. Once AIs take off, IQ will be a measure of how closely you can
> > > relate to the most powerful entities in your society. It's going to be
> > > something akin to living in the Third World and speaking fluent English.
> >
> > Define the "dropoff" to be the rate at which reduced IQ leads to
> > reduced fitness in the environment we're talking about.

IQ basically measures how apt one is at cracking a certain kind of
computational problem -- generally involving some of the following:

- deductive ability
- ability at "likelihood"-based induction
- pattern-matching
- principal component analysis (not necessarily "classic" PCA)
- clustering

All of these can be solved to some extent by computers now. There is no
reason to believe that a "full" AI, however you want to define it,
can't solve all of them. How can Y's ability to perform tasks that X
can also perform meaningfully correlate (that is, without any
hidden-third-cause tricks) with Y's ability to relate to X? (When X is
a human being there *is* a hidden third cause: the ability to perform a
task correlates with a common cultural background that bonds X and Y.)
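
Just to make the "computers already do this" point concrete, here is a
throwaway sketch in plain Python/numpy (random, made-up data, purely
for illustration -- nothing an actual AI would resemble) that does a
principal component analysis via an SVD plus a bare-bones k-means
clustering:

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.randn(200, 5)               # 200 toy observations, 5 variables

    # PCA via SVD: center the data, decompose, project onto 2 components
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc.dot(Vt[:2].T)           # coordinates in the principal plane

    # Bare-bones k-means on the projected data (k = 3); a real library
    # would handle empty clusters and convergence checks, this doesn't.
    k = 3
    centers = scores[rng.choice(len(scores), k, replace=False)]
    for _ in range(20):
        dists = ((scores[:, None, :] - centers) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        centers = np.array([scores[labels == j].mean(axis=0)
                            for j in range(k)])

    print(labels[:10])
    print(centers)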

The general principle behind that claim also implies that someone's
aptitude for doing fast linear algebra and Fourier transforms in their
head (basically autistic savants; I'm not bashing autistics, there's a
chance I'm one) correlates with their aptitude for scientific
(numerical) programming. Do you really want to uphold a theory that
implies that?

Allow me to be looser -- two anecdotes:

1) I haven't inverted a matrix by hand or in my head in (many, many)
years, and I must have done at most three linear regressions by hand
(and only while stuck on a six-hour bus trip with no calculator and
nothing better to do). Yet I'm an econometrician who works daily with
techniques much more sophisticated than linear regression: (a) I have
never attempted to compute an Arellano-Bond GMM estimator or a
cointegrated vector equilibrium correction model by hand, and (b) I
suck at manual linear algebra even though I understand not only
theoretical linear algebra but also the econometric theory (which uses
linear algebra as a technique) behind the estimators I produce by
pushing buttons on my very nice professional software package from
Quantitative Micro Software Inc.
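
For what it's worth, the "by hand" work the software spares me is just
the normal equations. A toy sketch in Python/numpy (made-up data; real
packages use numerically more careful methods than a literal matrix
inverse):

    import numpy as np

    rng = np.random.RandomState(1)
    n = 100
    x = rng.randn(n)
    y = 2.0 + 0.5 * x + rng.randn(n)      # fabricated data: y = 2 + 0.5x + noise

    X = np.column_stack([np.ones(n), x])  # design matrix with an intercept
    # OLS via the normal equations: beta = (X'X)^{-1} X'y
    XtX = X.T.dot(X)
    beta = np.linalg.solve(XtX, X.T.dot(y))
    resid = y - X.dot(beta)
    sigma2 = resid.dot(resid) / (n - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(XtX)))

    print(beta)                           # intercept and slope estimates
    print(se)                             # their standard errors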

2) I've taken a graduate-level course (meant for mathematicians) in
image processing algorithms. Don't ask me why -- it would involve
rambling on about bipolar disorder and hypomanic episodes and the
portrait of a young man trying to eat the sun. Anyway, that course --
even though it was taught entirely on a blackboard with the help of
only a handful of image samples -- made me somewhat capable of subtle
color manipulation in Adobe Photoshop to obtain certain aesthetic
effects (though I'm told by graphic designers that my color tastes
suck). Yet most people who are /very/ apt at image manipulation (and I
don't mean the "erase stretch marks and flaccid buttocks" kind, but
the subtle color "optimization" kind) have no idea what's going on
behind the scenes. And apparently they get better results than I do.

Oh well. I think I'm killing a fly with a cannonball by now. But
really, the discussion of /what/ will differentiate humans in their
ability to relate to post-singularity Powers or whatever (I'm not keen
on sci-fi words; the singularity /is/ an event horizon) can't be based
on their current similarity to how the Powers would work. I'd venture
it's the opposite -- what matters is how COMPLEMENTARY to the Powers a
person is.

All of this, of course, is based on the rather unpleasant scenario of
an "unfriendly" singularity in which we're all reduced to serving a
godlike Power's needs.

On a rather tangential side note, isn't the current research into the
mood mechanisms of the human brain transhumanist at some level? I know
I feel augmented by the psychotropic medication I've been taking, much
the way (though not to the same degree) I venture I'd feel if I had
coprocessor chips implanted.

2007/8/14, Daniel <kopacetic101@gmail.com>:
> 'Do you have any insight into how to influence the sharpness of the
> dropoff? If the one or two or ten people who can best interact with
> the AI get everything, and everyone else gets nothing, then the vast
> majority of everybody loses, and specifically I will very likely lose.'
>
> Unfortunately Tim you know as well as I do this is the way the world works.
> I wish I was wrong. The first space travellers will be the wealthy (ref. air
> travel), most likely they will also be the first to have any significant
> interaction with future AI (after the scientists who designed them). If
> "Uploading" were to become feasible in our lifetimes, I doubt I'd be
> anywhere near the foodchain as there would be a queue of billionaires ahead
> of me. It reminds me that in the event of a nuclear war who would be the
> people in the safest environment? I wonder if "Uploading" or interaction
> with a vastly superior intelligence were available today, who would really
> be interacting/uploading?
>
> Daniel
>
>
> On 8/8/07, Tim Freeman <tim@fungible.com> wrote:
> > Hmm, it seems I mangled my post. Sometimes emacs hides things. Trying
> again:
> >
> > From: "Byrne Hobart" <
> sometimesfunnyalwaysright@gmail.com>
> > >No. Once AIs take off, IQ will be a measure of how closely you can relate
> to
> > >the most powerful entities in your society. It's going to be something
> akin
> > >to living in the Third World and speaking fluent English.
> >
> > Define the "dropoff" to be the rate at which reduced IQ leads to
> > reduced fitness in the environment we're talking about.
> >
> > Do you have any insight into how to influence the sharpness of the
> > dropoff? If the one or two or ten people who can best interact with
> > the AI get everything, and everyone else gets nothing, then the vast
> > majority of everybody loses, and specifically I will very likely lose.
> >
> > If we have a choice, I hope we'll get to a situation where the dropoff
> > is gentle or nonexistent, and intelligence matters very little.
> > Otherwise I can't be sure I'll be on the correct side of the dividing
> > line.
> >
> > --
> > Tim Freeman    http://www.fungible.com    tim@fungible.com
> >
>
>

-- 
Random updates & snarky comments: http://twitter.com/doctork
Mission: Teach dumb matter to do the Turing boogie!
Vision: Little fluffy clouds forever

