Henry addresses Sully’s vapidness…

The nut of the argument, from the article Henry links to:

To summarize what follows below (“shorter sloth”, as it were), the case for g rests on a statistical technique, factor analysis, which works solely on correlations between tests. Factor analysis is handy for summarizing data, but can’t tell us where the correlations came from; it always says that there is a general factor whenever there are only positive correlations. The appearance of g is a trivial reflection of that correlation structure. A clear example, known since 1916, shows that factor analysis can give the appearance of a general factor when there are actually many thousands of completely independent and equally strong causes at work. Heritability doesn’t distinguish these alternatives either. Exploratory factor analysis being no good at discovering causal structure, it provides no support for the reality of g.
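That 1916 example is Thomson’s sampling (“bonds”) model, and it is easy to reproduce. Here is a minimal simulation of it; the counts of people, causes, and tests are my own arbitrary choices, not figures from the article. Each test score is the sum of a random half of many independent, equally strong causes, so no general factor exists by construction, yet every pair of tests correlates positively and the first factor looks exactly like g:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_causes, n_tests = 2000, 1000, 8

# Independent, equally strong causes ("bonds"); no general factor exists.
causes = rng.normal(size=(n_people, n_causes))

# Each test samples roughly half of the causes at random.
masks = (rng.random((n_tests, n_causes)) < 0.5).astype(float)
scores = causes @ masks.T  # a test score = sum of its sampled causes

R = np.corrcoef(scores, rowvar=False)
offdiag = R[~np.eye(n_tests, dtype=bool)]
print("min off-diagonal correlation:", offdiag.min())  # all positive

# First factor of the correlation matrix (largest eigenvalue).
eigvals, eigvecs = np.linalg.eigh(R)  # eigenvalues in ascending order
first = eigvecs[:, -1] * np.sign(eigvecs[:, -1].sum())  # fix arbitrary sign
print("share of variance on first factor:", eigvals[-1] / n_tests)
print("all loadings positive:", bool((first > 0).all()))
```

Despite the thousands of independent causes, the tests share overlapping samples of them, so the first factor soaks up roughly half the variance and loads positively on everything — exactly the signature usually read as evidence for g.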

The worst thing about the continuing reliance on factor analysis for causal claims is that it simply isn’t a tool that can support them. In some sense, back when we didn’t have laptops that could do in 3 minutes what took 3 days even just 12 years ago, let alone 30 years ago, factor analysis provided a very good first step for analyzing data and doing what I call feeling out the data.

Those trying to impute causality with factor analysis are wrong on two different levels. First, the method is simply not capable of doing so; and yet Charles Murray wrote a book with it at the core of his argument.

Second, correlation does not mean causation. For causal determinations, one must first choose a study design that eliminates other possible explanations, and offer a theory that can be tested by a significance test capable of rejecting, or failing to reject, the null hypothesis. Statistical analysis can contribute to the determination of causation, but only within a very specific study design that produces a controlled set of observations for testing.
In the case of Murray’s work, he doesn’t test his hypothesis; he simply finds a correlation and jumps up and down about it for hundreds of worthless pages. A bunch of gullible morons like Sullivan think it looks sophisticated and declare that those who call bullshit are somehow silencing groundbreaking work. It might have been groundbreaking work had it been done in 1900, not 1995 or 2007. It was shoddy bullshit that never should have been published by a reputable publisher.
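The design point is easy to see in a toy simulation (the variables here are invented for illustration, not Murray’s data). An observed correlation between x and y is produced entirely by a hidden common cause z; when x is instead assigned by randomization, severing its link to z, the correlation vanishes. It is the design, not the correlation itself, that licenses a causal claim:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

z = rng.normal(size=n)                  # hidden common cause (confounder)
x_obs = z + 0.5 * rng.normal(size=n)    # "observed" x, driven by z
y = z + 0.5 * rng.normal(size=n)        # y depends on z only, never on x

r_obs = np.corrcoef(x_obs, y)[0, 1]

x_rand = rng.normal(size=n)             # x set by randomization instead
r_rand = np.corrcoef(x_rand, y)[0, 1]

print(f"observational correlation: {r_obs:.2f}")   # large
print(f"randomized correlation:   {r_rand:.2f}")   # near zero
```

The observational correlation is about 0.8 even though x has no effect on y at all, which is the whole problem with resting a causal argument on correlational structure alone.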
I’ll copy the following simply because it is as parsimonious a statement as possible:

If I take any group of variables which are positively correlated, there will, as a matter of algebraic necessity, be a single dominant general factor, which describes more of the variance than any other, and all of them will be “positively loaded” on this factor, i.e., positively correlated with it. Similarly, if you do hierarchical factor analysis, you will always be able to find a single higher-order factor which loads positively onto the lower-order factors and, through them, the actual observables. [8] What psychologists sometimes call the “positive manifold” condition is enough, in and of itself, to guarantee that there will appear to be a general factor. Since intelligence tests are made to correlate with each other, it follows trivially that there must appear to be a general factor of intelligence. This is true whether or not there really is a single variable which explains test scores.
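The “algebraic necessity” here is the Perron–Frobenius theorem: for any correlation matrix whose entries are all positive, the leading eigenvector — the loadings on the “general factor” — has entries all of one sign. A small numeric check on a toy matrix of my own (any all-positive correlation matrix would do):

```python
import numpy as np

# A toy 4-variable correlation matrix; all off-diagonal entries positive.
R = np.array([[1.0, 0.3, 0.2, 0.1],
              [0.3, 1.0, 0.4, 0.2],
              [0.2, 0.4, 1.0, 0.3],
              [0.1, 0.2, 0.3, 1.0]])

eigvals, eigvecs = np.linalg.eigh(R)        # eigenvalues in ascending order
loadings = eigvecs[:, -1]                   # leading eigenvector
loadings = loadings * np.sign(loadings.sum())  # fix the arbitrary sign

print("loadings on the 'general factor':", loadings)
print("all positive:", bool((loadings > 0).all()))
```

The all-positive loadings fall out of the positivity of the matrix alone; nothing about the variables themselves was needed, which is the quoted point in miniature.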

These issues are non-trivial. My current boss was trained some decades ago as a psychologist, and he always wants to start off with a factor analysis. I usually look up, squint a bit, and shake my head. It’s not that it isn’t useful; it’s just such a basic step that I don’t think of it as anything interesting enough to mention in most work. Of course, being trained in cognitive psychology, he understands the limitations of factor analysis, and part of the reason he has kept my predecessor and me around is that we know modern statistics, and we look up, squint a bit, and shake our heads when he’s all excited about something we aren’t terribly interested in.

and more:

I am not sure what the oddest aspect of this situation is, because there are so many. It may be a statistician’s bias, but the things I keep dwelling on are the failures of methodology, which are not, alas, confined to all-correlations-all-the-time psychologists, but also seen in the right (that is, wrong) sort of labor-market sociologist, economists who regress countries’ growth rates on government policies, etc., etc. As the late sociologist Aage Sorensen said (e.g. here), the sort of social science which tries to identify causal effects by calculating regression coefficients or factor loadings stops where the scientist’s work ought, properly, to begin. (A more charitable view would be that these researchers are piling up descriptions, and hoping that someone will come along, any decade now, with explanations.) Many psychometric and econometric theorists know much better, but they seem to have little influence on practice.

I’d argue that quasi-experimental designs also work very well, though that’s a far longer discussion.