From: Randall Randall (randall@randallsquared.com)
Date: Fri Jan 02 2004 - 14:11:09 MST
On Thursday, January 1, 2004, at 12:43 PM, Tommy McCabe wrote:
>
> --- Randall Randall <randall@randallsquared.com>
> wrote:
>> This email was edited to remove a very bad word, per
>> the mailing list software, and resent. :)
>>
>> On Wednesday, December 31, 2003, at 08:43 PM, Tommy
>> McCabe wrote:
>> [ about intelligence ]
>>>> Why should we be nearly at the limit? What a
>>>> coincidence, that the very first species to hit
>>>> general intelligence would also be nearly at the
>>>> limit of the architecture they run on.
>>
>> There are lots of coincidences associated with being
>> the first species of persons in our past lightcone
>> anyway; why would this one seem more odd?
>
> Please elaborate.
>
>> If the limit really is just at the upper end of the
>> human spectrum, then it would follow that any mammal
>> brain which achieved human intelligence would be
>> very
>> near or at the limit for similar structures. Given
>> this, it would seem that your question boils down to
>> "why should the limit be so low?", but if it is, the
>> question may not have an answer other than "it's a
>> consequence of physical law".
>
> My question is, of all the places where the limit
> could be, wouldn't it be a very big coincidence that
> the limit happened to be at almost the exact level
> represented by the very first species we know of to
> have achieved general intelligence?
Here's an elaboration: You are, no doubt, familiar with
the weak anthropic principle. There are many physical
constants whose specific values, if different by any
significant amount, would eliminate the possibility of
life as we know it, and therefore the knowing. Now, one
can easily ask "why would the strong nuclear force have
exactly the value it needs to have for us to exist?",
but there doesn't need to be any answer other than "that's
just the way it is". This isn't to say that there *isn't*
some other answer; just that there doesn't logically have
to be one.
In the case of intelligence, it might be that strong
superintelligence would inevitably overrun the universe
if it existed (this seems likely), but that human-level
intelligence is much more likely to stay localized or
wipe itself out. In that case, the first superintelligence
would preclude any other intelligence from ever arising.
Now, I'm not really arguing that strong superintelligence
is impossible, though I emotionally favor the idea. My
original argument was merely that the belief that strong
superintelligence is impossible is not incompatible with
singularitarianism.
>> One problem here is that I don't know enough to have
>> this argument, by a long, long way. I've already
>> presented my (relatively tenuous) evidence for this,
>> but I'll recap it here for clarity: humans who are
>> highly intelligent seem to have a higher incidence
>> of mental disorder (i.e., insanity).
>
> First, I'd like to see a study or two confirm this.
> Second, if it is true, it could be something limited
> to our particular brain architecture. Third, it could
> be something limited to Darwinistically evolved
> organisms. Fourth, you have to distinguish people who
> are 'insane' (which is bad) from people who simply
> don't follow our semi-arbitrary social conventions
> (which is probably good). There are probably more
> reasons I can't think of.
I have no particular quarrel with any of these. Google
will gladly show you examples of studies about it.
--
Randall Randall <randall@randallsquared.com>