This might seem funny outside of academia, but within academia it’s friggen hysterical. The mild criticisms Levitt levels at Lott wouldn’t even count as an interesting tiff at a professional meeting, where boorish behavior and condescending denouncements of others are the norm. To sue over such mild statements is bizarre and demonstrates just how thin-skinned John Lott is.
According to Levitt’s book: “When other scholars have tried to replicate [Lott’s] results, they found that right-to-carry laws simply don’t bring down crime.”
But according to Lott’s lawsuit: “In fact, every time that an economist or other researcher has replicated Lott’s research, he or she has confirmed Lott’s conclusion.”
By suggesting that Lott’s results could not be replicated, Levitt is “alleging that Lott falsified his results,” the lawsuit says.
Lott is seeking a court order to block further sales of “Freakonomics” until the offending statements are retracted and changed. He is also seeking unspecified money damages.
Lott acknowledged in the suit that some scholars have disagreed with his conclusions. But he said those researchers used “different data or methods to analyze the relationship between gun-control laws and crime” and made no attempt to “replicate” Lott’s work.
Replicating the results means using different methods, you dumbass. The point of replicability in the scientific method is that one should be able to redo the research, gathering new data and using different but appropriate methodology to test the same hypotheses.
What’s most disturbing about this is that if Lott were successful, and he won’t be, it would have a chilling effect on peer review and the ability of academics to criticize one another’s work.
Kevin Drum also addresses how Lott is lying again (and sue me, John, I double dog dare you).
Lott could actually make a decent face-saving argument: while his research was flawed, he found an important point, that concealed carry doesn’t significantly increase crime, which was heavily in dispute when he first did his research. Now there is some discussion over whether there are minor crime increases correlated with concealed carry, but that is far different from the dire predictions of years ago.
But no, Lott makes a bigger ass of himself. I’m sure Tim will be having fun with this over at Deltoid. By the way, Lott recently left AEI.
Great post!
I thought replication was different data, same method? I mean, if you are adjusting the model, you are building on the science, not replicating it. It’s been a few years, though.
I also read last year that economists didn’t have to submit their data sets for peer review — even for prestigious journals. It kind of makes you wonder how economists get away with looking down on political scientists.
In Malcolm Gladwell’s “The Tipping Point,” Gladwell shows how the Broken Windows theory in NYC helped bring down the crime rate. Levitt challenges that and says that throughout the country it was abortion that brought down the crime rate.
According to Gladwell, because they are looking at different things (the country vs. NYC) and both phenomena can be significant, they can both be right.
That’s another example of how John Lott could have handled this better.
I was a bit lazy, but replication has two different meanings, both of which Lott misses.
The first is in the sense of science as a whole, meaning one should be able to replicate the findings in general, whether using other methods and data or not. So if you want to replicate the dating of a carbon-dated bone, you could use C-14 again or another material with a known half-life.
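To put rough numbers on that: a half-life date is just the decay equation solved for time, so two independent clocks ought to land on roughly the same age if the original date is sound. Here’s a back-of-the-envelope sketch in Python, where the measurement is made up for illustration:

import math

def age_from_decay(remaining_fraction, half_life_years):
    # Solve N/N0 = (1/2) ** (t / t_half) for t.
    return half_life_years * math.log(1.0 / remaining_fraction, 2)

# Hypothetical measurement: about 45% of the original C-14 remains in the sample.
print(round(age_from_decay(0.45, 5730)))   # C-14 half-life ~5,730 years -> roughly 6,600 years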
In social science there is some debate over what counts as replication, but you should be able to use the same methods with new or old data to replicate a finding. Sometimes extensions of findings, using slightly different methodology, are also called replication.
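In code terms, that narrower sense of replication is just re-running the same specification on a fresh sample and seeing whether the estimate holds up. A minimal sketch using statsmodels, where the variable names and data files are purely hypothetical:

import pandas as pd
import statsmodels.formula.api as smf

# Same model specification applied to two different samples (file names are hypothetical).
spec = "crime_rate ~ carry_law + unemployment + police_per_capita"

original = smf.ols(spec, data=pd.read_csv("original_sample.csv")).fit()
fresh = smf.ols(spec, data=pd.read_csv("new_sample.csv")).fit()

# Replication in the narrow sense: same method, new data, comparable estimate on the key variable.
print(original.params["carry_law"], fresh.params["carry_law"])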
Either way, Lott screws up the concept. When I got into a debate with him on this blog some time ago, he showed an incredible level of pig-headedness even when I gave him a somewhat face-saving out. It was sort of like watching a train wreck.
==I also read last year that economists didn’t have to submit their data sets for peer review — even for prestigious journals. It kind of makes you wonder how economists get away with looking down on political scientists
It depends. Political scientists have been better about making data sets available, but economists do have some specific problems: if they get a data-sharing agreement with a company or such, they may not be free to disseminate the data. In a similar vein, some comparativists argued that forced availability of data was unfair to them because they invested the research money and time, and then anyone could come along and use their data.
I think it’s best to make the data available, but also understand there are cases where that isn’t realistic. And certainly, not all political science journals require it yet.
I think your example is very good. Lott could have been gracious, declared victory, and said that new models will always evolve and he was glad for his role in furthering research. Everyone would have been okay (well, other than the issues with some of the inexplicable data).
I think a lot of Levitt’s work is meant to be more provocative than serious, though he’s incredibly talented and friggen brilliant. In many cases the points he makes suffer the same problems as Lott’s work: trying to deal with complex macro issues with relatively simple models. Really interesting, but perhaps not the most concrete way to answer questions. And I think he knows that…