Kieran Healy responds to Kevin Drum’s question regarding whether Lott’s results could have happened.

Kieran is correct that weighting could plausibly have created the results. If one weights the respondents, all sorts of oddities can arise in the totals, and with only two respondents reporting firing a gun in a DGU, wacky results aren’t that unlikely. Of course, they are utterly useless. As I mentioned earlier, the margin of error in a sample of 25 DGUs would be +/- 20%, assuming the survey was flawless in every other way.

As Kieran points out, the chances of the survey being of any quality rapidly approach zero, and it isn’t even a good bet that the survey is close to representative, given the half-assed manner in which it was administered.

Kieran is far too nice when he discusses whether the number of DGUs is reasonable. As mentioned below, the margin of error on such a sample would be +/- 20%. IOW, the results are utterly meaningless. And even that is generous, because it assumes the sample is representative. Given Lott’s inability to answer basic questions about survey reliability and validity, that is highly unlikely.
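For reference, the +/- 20% figure falls out of the standard formula for the margin of error on a sample proportion. This is just a sketch of where the number comes from; the 95% confidence level (z = 1.96) and the worst-case p = 0.5 are my assumptions, since the calculation isn’t spelled out here:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a sample proportion at the given z-value.

    p = 0.5 is the worst case and maximizes the margin.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A subsample of 25 DGU respondents:
print(f"+/- {margin_of_error(25):.0%}")  # roughly +/- 20%
```

With n = 25 the standard error is sqrt(0.25/25) = 0.10, so the 95% margin is 1.96 × 0.10, or about 20 points either way.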

Lott claims his new survey will solve these problems, but that assumes he oversamples DGUs. If he doesn’t, his findings will be as pointless as the above.

What is really fascinating is how much information he claims to have collected. Here is a bit from Lambert’s page:

6) weighting the sample
I did not weight the sample by household size but used the state level age, race, and sex data that I had used in the rest of my book. There where 36 categories by state. Lindgren hypotheses why you can get such small weights for some people and I think that this fine of a breakdown easily explains it. I don’t remember who answered what after all these years, but suppose someone who fired a gun was a elderly black in Utah or Vermont.

So he collected 36 demographic variables in his survey? Or are there 36 breakdowns of a smaller number of demographics? Either way, this was a pretty involved survey. Surveying isn’t nearly as quick as it sounds, and collecting that many distinct variables plus the gun use information would entail a significant amount of time on the phone for each call.

Using a professional center, I’m guessing he would have been looking at $20,000 at least. Even using cheaper (and less reliable) labor, this had to cost a lot of money. Gathering that much demographic data with untrained surveyors, without the benefit of computer-assisted interviewing, would likely take 10 minutes a call (and that is charitable). At 2,400 respondents, that is 400 hours of labor, not counting non-responders and no-answers. At $7/hr, that is $2,800 for labor alone, plus (again charitably) $1,200 in long distance at $.05 a minute. So he paid $4,000 out of his own pocket?
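The back-of-the-envelope arithmetic above can be checked directly; the per-call time, wage, and long-distance rate are the charitable assumptions stated in the paragraph, not known facts about Lott’s survey:

```python
respondents = 2400
minutes_per_call = 10    # charitable estimate for an involved survey
wage_per_hour = 7.00     # cheap, untrained labor
ld_rate = 0.05           # long distance, $ per minute

total_minutes = respondents * minutes_per_call   # 24,000 minutes on the phone
labor_hours = total_minutes / 60                 # 400 hours
labor_cost = labor_hours * wage_per_hour         # $2,800
phone_cost = total_minutes * ld_rate             # $1,200

print(labor_hours, labor_cost + phone_cost)      # 400 hours, $4,000 floor
```

And again, that floor excludes non-responders, no-answers, coding, and data entry.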

Does he really think anyone buys that he didn’t at least deduct this expense from his taxes? The costs above are a bare minimum; the out-of-pocket costs would have been twice that just to obtain the sample, let alone code it and enter it. Of course, according to Lott,

Lindgren has the “impression” that the students entered the data on sheets. I do not directly recall this part of our conversation, but I would have said that both were done.

This adds another significant cost, as even a simple database for 2,400 respondents would take time to construct, especially with 36 categories.

Of course, all of this should have been overseen by a competent researcher, which Lott appears not to be. The stunning thing is that, if Lott really did all of this, he wasted a fair amount of money on a useless survey that lacked even rudimentary controls to ensure a sample of any quality.

It may be that Lott didn’t just make up the number, but he might as well have. One can complain all one wants that this is only about one sentence. Strictly speaking it is, but it also shows an incredible lack of understanding of data collection and analysis. His other results aren’t very impressive once one looks at the standard errors anyway. I am beginning to wonder if Lott uses these side issues to avoid discussing his actual results. By focusing on claims of fraud and providing just enough information to be plausible, he ensures that no one ever gets around to questioning the actual theory and statistical results.

The IRB question is very interesting. Often a survey center deals with IRB issues, so the researcher doesn’t have to worry about them. In this case, I believe Lott should have filed a report. In fact, his careless control over the data demonstrates a good reason for IRB oversight. According to him, results tied to specific individuals were lying around dorm rooms. That is not a professional or acceptable way of ensuring respondent confidentiality.

UPDATE: I just looked up the figures for some professionally run phone surveys, and a 10-minute survey of 1,200 people would run around $20,000. That means that even with some economies of scale, Lott’s survey (assuming 10 minutes was adequate) would come in above $30,000. One can claim to be able to do it cheaper on the fly, but given the basic costs I’ll allow that to be only about half a professional center’s cost, and that is still at least $15,000.
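Scaling the professional quote the same way shows where those figures come from; the economies-of-scale and half-price allowances are the rough concessions made above, not quotes from any vendor:

```python
pro_quote_1200 = 20_000            # quoted rate, 10-minute survey, 1,200 people
naive_2400 = pro_quote_1200 * 2    # $40,000 with no volume discount
scaled_2400 = 30_000               # allow generous economies of scale
diy_floor = scaled_2400 / 2        # "on the fly" at half a center's cost

print(naive_2400, scaled_2400, diy_floor)  # 40000 30000 15000.0
```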
