2003

Lotta Nonsense

The question John Lott won’t answer is why he would do a survey incapable of providing valid and reliable results for the question he is trying to answer. Lott has been shopping around the number that 98% of DGUs (defensive gun uses) occur without a shot being fired. This is significantly different from the results of other studies: no other study shows a figure of less than 21% of DGUs involving the weapon being fired. I’ve asked him three times, and he has yet to provide a substantive response.

If one thought a major methodological flaw were inflating the rate of DGUs involving a discharge, one would design a study that improved upon previous designs. His does not. It offers a rudimentary instrument, sampling methods that are, to be generous, suspect, and a sample size smaller than that of at least one previous study. In addition, the execution of both the first survey and the second survey can kindly be called amateurish. The question I have asked him three times is how his study hopes to improve upon the 1995 paper by Kleck and Gertz. His only on-topic response was:

There are significant problems with using a five year window. Sure it helps you get a bigger sample of defensive gun uses, but there is also a lot more error. For example, using five years is likely to results in respondents including cases that go back even further than five years. Answers to questions about what happened are also likely to contain more errors.

This is a terribly misleading response, given that Kleck and Gertz’s 1995 piece looked at both five-year and one-year time frames precisely because of the issues involved in five-year recollections. In the single-year time frame, the Kleck and Gertz survey recorded 56 respondent DGUs and 68 household DGUs, both numbers more than twice the number in Lott’s first survey. In both surveys these samples are too small to support strong claims about the DGU subsamples. However, given that Lott had 25 DGUs, his margin of error is +/- 20 percent, compared to Kleck and Gertz’s +/- 13.4 percent (using individuals, since Lott didn’t ask about households; that is also problematic; see Lambert’s post from the 12th). Even if Lott were foolish enough to try to claim both surveys as his sample, he would still end up with fewer cases than Kleck and Gertz. The newer survey has produced about 1,015 responses, so if it is truly consistent with the mysterious first survey, those responses would add only about 10.5 more DGUs, for a total of roughly 35.5. That is still lower than the Kleck and Gertz one-year sample. And I made a mistake yesterday: Tim Lambert found that the one-year time frame produced firing rates identical to those of the five-year respondents. Mea culpa. As a note, Tim has been on this issue far longer than I have, and while I hope I’m adding something, he has far more information.
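For anyone who wants to check these back-of-the-envelope numbers, here is a minimal Python sketch. It assumes the rough 1/sqrt(n) convention for the margin of error that I am using above, and it treats the first survey as roughly 2,424 completed interviews, a figure implied by the 3,439 combined total Lott cites rather than stated anywhere directly:

from math import sqrt

def moe(n):
    # Rough 95 percent margin of error for a proportion, using the
    # conservative 1/sqrt(n) rule of thumb (worst case, p = 0.5).
    return 1 / sqrt(n)

for label, n in [("Lott 1997 DGUs", 25),
                 ("Kleck & Gertz one-year respondent DGUs", 56),
                 ("Kleck & Gertz five-year DGUs", 222)]:
    print(label, "n =", n, "MOE = +/-", round(100 * moe(n), 1), "percent")

# Projecting DGUs for the ~1,015-response 2002 survey, assuming the same
# DGU rate as the first survey (25 DGUs out of roughly 2,424 interviews).
rate = 25 / 2424
print("expected new DGUs:", round(rate * 1015, 1),
      "combined total:", round(25 + rate * 1015, 1))

Whichever margin-of-error convention you prefer, the comparison comes out the same way: 25 cases are simply too few to resolve the question Lott claims to answer.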
His other responses complained about what he feels is a misrepresentation by me:

As for the 2002 survey, a number of calls (form the surveyors end) were indeed randomly listened to by me. In all defensive gun uses, the surveyors were debriefed that night or the following morning about the call. All the respondents in these cases volunteered extensive details of what happened with the defensive gun use. None of the defensive gun uses recorded involved defensive uses by police. A couple of our surveyors had previous experience and I asked them to talk to the other surveyors before surveying began. As a result of call backs, over 50 percent of telephone numbers produced completed interviews.

Given the picture emerging of John Lott’s understanding of survey research, I’ll stand by my comment that the second survey isn’t an improvement and there was no effective supervision of callers. He does claim that a couple of the callers had some experience and they talked to the others. My, how comforting. Apparently he feels that his surveys are of the quality of the Kleck and Gertz survey, but can’t explain why.

John Lott is apparently uninterested in explaining how his survey improves upon better executed surveys from the past. Given this, one has to question whether Lott has any goal besides covering himself. Neither survey would be an improvement upon the Kleck and Gertz work or other studies, so one is left with the uncomfortable feeling that John Lott is in the business of producing results that fit what he wants rather than what he observes. The alternative is that he doesn’t understand the extent of the problem. Ignorance is not a flattering excuse in this case.

But to take this one step further: Kleck and Gertz acknowledge the limits of their one-year numbers; that subsample is too small to support strong conclusions, and they say so in their article. To add to the literature, Lott would need not just to match that survey’s quality but to improve upon it with a larger number of DGU respondents. Instead, he will have fewer, yet he expects the public to accept the results. That’s chutzpah.
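To put a rough number on what "a larger number of DGU respondents" means, here is a minimal sketch of the standard sample-size formula for estimating a proportion. The 20 percent firing rate and the five-point target are illustrative values I picked to show the scale involved, not figures from either survey:

from math import ceil

def sample_size(p, margin, z=1.96):
    # Respondents needed to estimate a proportion near p to within
    # +/- margin at roughly 95 percent confidence (normal approximation).
    return ceil(z * z * p * (1 - p) / (margin * margin))

print(sample_size(0.20, 0.05))  # about 246 DGU cases

In other words, pinning the firing rate down to within a few points takes a DGU subsample on the order of Kleck and Gertz’s 222 cases, not Lott’s 25.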

Jacob Levy added a comment regarding the incident. The IRB issue has been dismissed because the Chicago Law School apparently ignored such rules. This is disturbing, but arguably Lott was part of a larger problem in that case. What is most disturbing, as I’ve said before, is that Lott had survey research with identifying information being stored and entered in dorm rooms. This isn’t a technical violation of the rules on the treatment of human subjects; it is a serious violation of how such material should be stored to protect human subjects’ privacy. Of all people, gun rights advocates should understand the importance of this.

In a larger sense, my views on gun control have largely been shaped by works like Kleck and Gertz’s. I don’t necessarily have a problem with concealed carry, I dislike HCI for many of the same reasons I dislike Lott, and I think guns should be generally available to the public, though preferably with an FOID system like the one Illinois has. Why would some gun rights advocates work so hard to protect a charlatan when credible researchers are out there making solid arguments about the defensive use of guns, arguments that are overwhelmingly positive for gun rights advocates? Clearly, not all gun rights advocates take Lott seriously, but some seem to consider him more credible and impressive than people like Kleck. I don’t get it.

Green Energy Bill up in the Lege

Rich Miller at the Capitol Fax reports:

"GREEN" ENERGY BILL COMING Crain’s Chicago Business reported over the weekend that a bill will be introduced soon to require the state’s electric utilities to purchase a portion of their power from renewable sources.

The bill, which will be sponsored by Sen. Pat Welch (D-Peru), will force electric companies to buy 5 percent of their power from renewable sources by 2010, and 15 percent by 2020. The utilities question the reliability and availability of alternative sources like wind power and say it costs more. But proponents claim the cost is about the same for wind and natural gas (3 cents per kilowatt-hour for gas and 3 to 4 cents for wind), and say the initial 5 percent requirement is sufficiently modest. Several companies are hoping to build "wind farms" in central Illinois, according to the article, but the investments may not be made without the legislation.

This is great news. One of the critical aspects of encouraging green energy is creating a market for it. A requirement like this would create competition among such suppliers and ultimately make green energy affordable.

Lott’s Response Today

From John Lott:

Here is a suggestion. If you have a question about whether something was done in the survey, it might make more sense to ask the question whether it was done rather than asserting that it must not be true. I had simply asked James to write up specific points, particularly how the survey sample was gathered. The discussion that I sent you dealt with the issues that had previously been raised.

There are significant problems with using a five year window. Sure it helps you get a bigger sample of defensive gun uses, but there is also a lot more error. For example, using five years is likely to results in respondents including cases that go back even further than five years. Answers to questions about what happened are also likely to contain more errors.

So Lott wants to claim that his survey ‘improves upon’ Kleck and Gertz (1995) because he limits it to one year. That might have some merit if he used the same or better survey techniques and his sampling was anything close to the quality of Kleck and Gertz’s.

He didn’t. In fact, the Kleck and Gertz survey called back to verify results on all DGUs, instituted several criteria to ensure that reported uses were genuine civilian defensive uses, and used significant screening to determine the nature of each reported DGU.

Anytime anyone is asked to report a memory, the farther back the event, the less trustworthy the memory. Lott is correct about this, and it is why one would probe the memory and attempt to establish credibility. Additionally, self-reporting can inflate the numbers, something Kleck was very concerned with and something Lott, despite protestations that all DGU respondents provided extensive details, did not address in his instrument.

Of course, what Lott is not mentioning, and what is highly relevant, is that Kleck and Gertz asked about DGUs within the last year as well. In fact, they have more cases in that subsample than Lott does per Table 2 (scroll down). Kleck and Gertz don’t break down the one-year rate of a weapon being fired, but it is reported as being higher than in the five-year reports. Of course, the subsample is small and has a larger margin of error than the 222 total (or 213, depending on which sample they are discussing). That MOE is still smaller than Lott’s, and it comes from a far better constructed survey.

For a good discussion of the issues surrounding 1 or 5 year samples see Kleck and Gertz. Lott’s complaint about the 5 year window is especially curious in this case given Kleck and Gertz explicitly cover the issue in detail and account for it.

As for the 2002 survey, a number of calls (form the surveyors end) were indeed randomly listened to by me. In all defensive gun uses, the surveyors were debriefed that night or the following morning about the call. All the respondents in these cases volunteered extensive details of what happened with the defensive gun use. None of the defensive gun uses recorded involved defensive uses by police. A couple of our surveyors had previous experience and I asked them to talk to the other surveyors before surveying began. As a result of call backs, over 50 percent of telephone numbers produced completed interviews.

In a pattern that is becoming all too apparent, Lott tries to equate his fly-by-the-seat-of-the-pants approach to conducting a survey with having trained surveyors.

This speaks to Lott’s poor understanding of surveys. Even individuals who do a lot of survey research generally rely upon survey experts to conduct and help construct the surveys. The relevant passages in Kleck and Gertz are:

The present survey is the first survey ever devoted to the subject of armed self-defense. It was carefully designed to correct all of the known correctable or avoidable flaws of previous surveys which critics have identified. We use the most anonymous possible national survey format, the anonymous random digit dialed telephone survey. We did not know the identities of those who were interviewed, and made this fact clear to the Rs. We interviewed a large nationally representative sample covering all adults, age eighteen and over, in the lower forty-eight states and living in households with telephones. [42] We asked DGU questions of all Rs in our sample, asking them separately about both their own DGU experiences and those of other members of their households. We used both a five year recall period and a one year recall period. We inquired about uses of both handguns and other types of guns, and excluded occupational uses of guns and uses against animals. Finally, we asked a long series of detailed questions designed to establish exactly what Rs did with their guns; for example, if they had confronted other humans, and how had each DGU connected to a specific crime or crimes.

We consulted with North America’s most experienced experts on gun-related surveys, David Bordua, James Wright, and Gary Mauser, along with survey expert Seymour Sudman, in order to craft a state-of-the-art survey instrument designed specifically to establish the frequency and nature of DGUs. [43] A professional telephone polling firm, [Page 161] Research Network of Tallahassee, Florida, carried out the sampling and interviewing. Only the firm’s most experienced interviewers, who are listed in the acknowledgements, were used on the project. Interviews were monitored at random by survey supervisors. All interviews in which an alleged DGU was reported by the R were validated by supervisors with call-backs, along with a 20% random sample of all other interviews. Of all eligible residential telephone numbers called where a person rather than an answering machine answered, 61% resulted in a completed interview. Interviewing was carried out from February through April of 1993.

The quality of sampling procedures was well above the level common in national surveys. Our sample was not only large and nationally representative, but it was also stratified by state. That is, forty-eight independent samples of residential telephone numbers were drawn, one from each of the lower forty-eight states, providing forty-eight independent, albeit often small, state samples. Given the nature of randomly generated samples of telephone numbers, there was no clustering of cases or multistage sampling as there is in the NCVS; [44] consequently, there was no inflation of sampling error due to such procedures. To gain a larger raw number of sample DGU cases, we oversampled in the south and west regions, where previous surveys have indicated gun ownership is higher. [45] We also oversampled within contacted households for males, who are more likely to own guns and to be victims of crimes in which victims might use guns defensively. [46] Data were later weighted to adjust for oversampling.

Each interview began with a few general "throat-clearing" questions about problems facing the R’s community and crime. The interviewers then asked the following question: "Within the past five years, have you yourself or another member of your household used a gun, even if it was not fired, for self-protection or for the protection of property at home, work, or elsewhere? Please do not include military service, police work, or work as a security guard." Rs who answered "yes" were then asked: "Was this to protect against an animal or a person?" Rs who reported a DGU against a person were asked: "How many incidents involving defensive uses of guns against persons happened to members of your household in the past five years?" and "Did this incident [any of these incidents] happen in the past twelve [Page 162] months?" At this point, Rs were asked "Was it you who used a gun defensively, or did someone else in your household do this?"

All Rs reporting a DGU were asked a long, detailed series of questions establishing exactly what happened in the DGU incident. Rs who reported having experienced more than one DGU in the previous five years were asked about their most recent experience. When the original R was the one who had used a gun defensively, as was usually the case, interviewers obtained his or her firsthand account of the event. When the original R indicated that some other member of the household was the one who had the experience, interviewers made every effort to speak directly to the involved person, either speaking to that person immediately or obtaining times and dates to call back. Up to three call-backs were made to contact the DGU-involved person. We anticipated that it would sometimes prove impossible to make contact with these persons, so interviewers were instructed to always obtain a proxy account of the DGU from the original R, on the assumption that a proxy account would be better than none at all. It was rarely necessary to rely on these proxy accounts–only six sample cases of DGUs were reported through proxies, out of a total of 222 sample cases.

While all Rs reporting a DGU were given the full interview, only a one-third random sample of Rs not reporting a DGU were interviewed. The rest were simply thanked for their help. This procedure helped keep interviewing costs down. In the end, there were 222 completed interviews with Rs reporting DGUs, another 1,610 Rs not reporting a DGU but going through the full interview by answering questions other than those pertaining to details of the DGUs. There were a total of 1,832 cases with the full interview. An additional 3,145 Rs answered only enough questions to establish that no one in their household had experienced a DGU against a human in the previous five years (unweighted totals). These procedures effectively undersampled for non-DGU Rs or, equivalently, oversampled for DGU-involved Rs. Data were also weighted to account for this oversampling.

Questions about the details of DGU incidents permitted us to establish whether a given DGU met all of the following qualifications for an incident to be treated as a genuine DGU: (1) the incident involved defensive action against a human rather than an animal, but not in connection with police, military, or security guard duties; (2) the incident involved actual contact with a person, rather than merely investigating suspicious circumstances, etc.; (3) the defender could state a specific crime which he thought was being committed at the time of the incident; (4) the gun was actually used in some way–at a minimum it had to be used as part of a threat against a person, either by [Page 163] verbally referring to the gun (e.g., "get away–I’ve got a gun") or by pointing it at an adversary. We made no effort to assess either the lawfulness or morality of the Rs’ defensive actions.

An additional step was taken to minimize the possibility of DGU frequency being overstated. The senior author went through interview sheets on every one of the interviews in which a DGU was reported, looking for any indication that the incident might not be genuine. A case would be coded as questionable if even just one of four problems appeared: (1) it was not clear whether the R actually confronted any adversary he saw; (2) the R was a police officer, member of the military or a security guard, and thus might have been reporting, despite instructions, an incident which occurred as part of his occupational duties; (3) the interviewer did not properly record exactly what the R had done with the gun, so it was possible that he had not used it in any meaningful way; or (4) the R did not state or the interviewer did not record a specific crime that the R thought was being committed against him at the time of the incident. There were a total of twenty-six cases where at least one of these problematic indications was present. It should be emphasized that we do not know that these cases were not genuine DGUs; we only mean to indicate that we do not have as high a degree of confidence on the matter as with the rest of the cases designated as DGUs. Estimates using all of the DGU cases are labelled herein as "A" estimates, while the more conservative estimates based only on cases devoid of any problematic indications are labelled "B" estimates.

The question remains: what of value does John Lott think his survey is going to produce? Comparing the two approaches to conducting a survey shows just how unconcerned Lott is with the accuracy of his research.

Lotta Confusion

First, I was clearly confused about the categorization of variables; his response makes it clear how one gets to 36. Mea culpa. Second, he has released tax records, though it is impossible to garner any useful information from them as to whether a survey was conducted. Conveniently, as with every other piece of direct evidence of the survey, all supporting documentation is gone.
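For anyone else who was confused, the arithmetic is simply six age brackets times two sexes times three race categories. Here is a minimal Python sketch of the sort of "if, then" bucketing Lott describes in his response; the age labels past the two he gives are my guesses, not his actual coding:

# 6 age brackets x 2 sexes x 3 races = 36 demographic cells.
AGES = ["10-19", "20-29", "30-39", "40-49", "50-59", "60+"]  # later brackets assumed
SEXES = ["male", "female"]
RACES = ["black", "white", "other"]

def cell(age, sex, race):
    # Map the three recorded answers to one of the 36 cells.
    return (AGES.index(age) * len(SEXES) + SEXES.index(sex)) * len(RACES) + RACES.index(race)

assert len(AGES) * len(SEXES) * len(RACES) == 36

Each cell’s share of the survey can then be compared with that cell’s share of the state population from the Census, which is the weighting step he describes.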

It appears that I significantly overestimated the amount of time for each call as well. The reason for this? I assumed the survey was at least attempting to be of some quality. For some context, one should examine the methodology of Kleck and Gertz’s (1995) survey found in Section C-1 of the paper.

So in the first case, Lott paid some undergraduates, and accepted volunteer efforts from others, to perform a survey that:

1) was performed in dorm rooms
2) used untrained personnel
3) used mediocre sampling methods based on a commercial CD
4) had no call-backs for verification
5) had no effective supervision
6) had no questions distinguishing civilian defensive uses from police uses
7) ended with a question asking whether the interviewer believed the respondent, with no objective guidelines
The list could go on…

Why was this survey ever done? It clearly couldn’t improve upon the work already done and published by Kleck and Gertz in 1995. There are only a few reasons to attempt a new survey after another researcher has already done the work:

1) Situation has changed.

Not relevant in the two years since Kleck and Gertz.

2) Replicate the results

Given how primitive the survey instrument and methodology are, Lott could have had no real hope of replicating Kleck and Gertz.

3) Improve on the findings by using some sort of innovation.

Again, this would rely on the survey using an alternative to the original study that could leverage more information out of respondents. Lott’s survey would not do that in any way.

So why do it? Good question. Not only did Lott use his own money, he wasted it to the tune of several thousand dollars. It would be impossible to account for the actual costs because:
a) Lott has no effective records of paying anyone. He paid some $8,000, but there isn’t any supporting documentation.
b) He apparently used students as both volunteers and paid employees.
c) No one can locate a student who did the research.

Even better, Lott decided to replicate the missing survey using AEI interns in the same slipshod manner, except this time the calls were made at AEI. He says the results will be available in his upcoming book and that the sample size is ~1,000. Other than covering himself, this survey serves no purpose.

In his first survey the sample of DGUs was 25, meaning a margin of error of +/- 20 percent. Compared to the Kleck and Gertz survey, where oversampling produced 222 DGUs and a margin of error of +/- 6.7 percent within the DGU subsample, this is asinine.
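To see why 25 cases cannot support a "98 percent brandished" claim in the first place, here is a minimal sketch using a simple Wald confidence interval. The 79 percent figure is just the complement of the lowest fired share (21 percent) reported in other studies, used here purely for illustration:

from math import sqrt

def wald_ci(successes, n, z=1.96):
    # Crude 95 percent confidence interval for a proportion; rough for
    # small n and extreme p, but enough to show the resolution problem.
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)
    return round(max(0.0, p - half), 3), round(min(1.0, p + half), 3)

# With ~25 DGUs, the closest raw counts to "98 percent not fired" are
# 24 of 25 (96 percent) or 25 of 25 (100 percent).
print(wald_ci(24, 25))    # roughly (0.88, 1.0)

# With Kleck & Gertz's 222 DGUs, a 79 percent "not fired" share gives:
print(wald_ci(175, 222))  # roughly (0.73, 0.84)

The exact interval formula matters far less than the comparison: the two surveys are answering the question at completely different resolutions.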

In responding to me:

Between the two surveys 3,439 people have been interviewed. Can more information be obtained? Sure, but given my personal resources and that these surveys are such a trivial portion of my overall interests I have spent about as much time as I plan on spending on this issue.

Actually, without the first survey, only around 1,000 people have been surveyed. Lott appears to think that his previous results are worth something because they reside in his memory. I would point out that without the data he won’t be able to use those results at all, and given that he doesn’t even have the original instrument, one can’t even tell whether he was asking the same thing.

Ultimately, there is no point to this survey if he wishes to establish the rate of firing versus merely brandishing. Any claim to the contrary is statistical malpractice. Even more troubling is that Lott is apparently disputing the results of well done surveys by others without even bothering to do a survey that might improve upon their results.

As Kieran Healy pointed out the other day, it seems quite clear one shouldn’t be going around spouting a number when all of the underlying research is now gone. To make matters worse, the data are utterly meaningless in regard to the 98% claim, so why would one repeat it?

If you take Lott at his word and the first survey was done, I’m not sure that really adds to his credibility; it might just change the reason for concluding he isn’t credible. In one case he would be a fraud, in the other an absolute incompetent or just someone selling a bunch of malarkey.

Lotta Survey

Here is the survey and descriptive material John Lott passed along:

D. Survey on Defensive Gun Use

Below is the survey that was used to identify the rate of defensive gun use.

Hello, my name is _______, and I am a student at ________ working on a very brief survey on crime. The survey should take about one minute. Could I please ask you a few questions?

1) During the last year, were you ever threatened with physical violence or harmed by another person or were you present when someone else faced such a situation?

(Threats do not have to be spoken threats. Includes physically menacing. Attacks include an assault, robbery or rape.)

a) Yes
b) No
c) Uncertain
d) Declined to answer

(Just ask people "YES" or "NO." If they answer "NO" or "Decline to answer," go directly to demographic questions. If people are "Uncertain" or say "YES," proceed with question 2.)

2) How many times did these threats of violence or crimes occur?
_____

3) Which of the following best describe how you responded to the threat(s) or crime(s)? Pick one from the following list that best described your behavior or the person who you were with for each case faced.

a) behaved passively
b) used your fists
c) ran away
d) screamed or called for help
e) used a gun
f) used a knife
g) used mace
h) used a baseball bat or club
i) other

(Rotate these answers (a) through (h), place a number for 0 to whatever for each option. Stop going through list if they volunteer answer(s) that account for the number of threats that they faced.)

4) This is only done if the respondent answers "e" (a gun) to question 3

If a gun was used, did you or the other person you were with:

a) brandish it
b) fire a warning shot
c) fire at the attacker
d) injure the attacker
e) kill the attacker

(Again, place a number for 0 to whatever number is appropriate for each option. Rotate answers.)

5) Were you or the person you were with harmed by the attack(s)?

a) Yes
b) No
c) Refused to answer

(We obviously have the area code for location, write down sex from the voice if possible, otherwise ask.)

Two demographic questions asked of all participants.

What is your race? black, white, Hispanic, Asian, Other.

What is your age by decade? 20s, 30s, 40s, so on.

Question for surveyor: Is there any reason for you to believe that the person was not being honest with you?

a) Didn’t believe respondent at all
b) Had some concerns
c) Had no serious concerns

Write up by James Knowles of the discussion of the survey:

We had a small army of interns and AEI staff making phone calls. The callers for any given night varied according to who was available/willing to make phone calls. I was here every night supervising from my office at AEI. The survey was conducted over eight nights. Calls were made between 7pm and 9pm local time. (deleted material about workers that isn’t relevant)

We used a phonebook program from a company called infoUSA; the program was called Select Phone Pro version 2.4. The program has a random function. First, we calculated how many numbers should be drawn from each state, we decided that we would pull 4,000 numbers (based on how much the PhonePro program gives us for free). Then we took the populations of each state from the Census and assigned the quantity of numbers that we indended to get from the program. Then, in PhonePro, we picked a state, then sorted the state’s list by area code, then randomly generated a number (using excel’s analysis pack) as a starting point, then the Phone Pro program would export a number every so often from the list until it reached the desired number of listings exported from the state. This may be something that is easier explained in a conversation, my direct line is xxx.
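For readers trying to picture what that selection procedure amounts to, here is a minimal Python sketch of my reading of it: allocate the 4,000 numbers across states in proportion to Census population, then take a systematic sample from each state’s sorted list starting at a random offset. The state populations in the example are made up, and the Select Phone Pro internals are a black box, so treat this as an illustration of the shape of the method rather than a reconstruction of what was actually run:

import random

def allocate(total, state_pops):
    # Split the sample across states in proportion to population.
    pop_sum = sum(state_pops.values())
    return {state: round(total * pop / pop_sum) for state, pop in state_pops.items()}

def systematic_sample(numbers, k, rng=random):
    # Sort the list (Knowles sorted by area code), pick a random start,
    # then export every step-th entry until k numbers are drawn.
    numbers = sorted(numbers)
    step = max(1, len(numbers) // k)
    start = rng.randrange(step)
    return numbers[start::step][:k]

# Hypothetical usage with made-up populations, just to show the shape:
quota = allocate(4000, {"IL": 12_400_000, "MO": 5_600_000, "IA": 2_900_000})
# quota comes out to {"IL": 2373, "MO": 1072, "IA": 555}

Note that nothing in a procedure like this addresses the quality concerns discussed above; it only describes how numbers were pulled from the CD.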

Lotta Response

Lott responds to the previous posts in e-mail. I’ll post it in its entirety out of fairness and address the issues in later posts:

My responses:

I have already discussed these issues, but it is obvious that I need to repeat what I have previously sent out.

I have previously posted the survey questions used (see below for another copy). The 36 demographic categories for the 1997 survey were the exact same ones that I used in all the regressions in my book More Guns, Less Crime. The breakdown is by age (six age categories — 10 to 19, 20 to 29, etc.), by sex, and by race (black, white, other). Information on where the person lived is immediately available because we have the person’s area code and address from the telephone CD that we used. As to demographic questions, you will see that we asked two race and age. The student conducting the survey would fill in sex on their own unless there was a question.

"This adds another significant amount as even a simple database for 2400 respondents would take time to construct–especially with 36 categories."

This is simply incorrect. The demographic information is a product of only four variables. First you merge the census demographic and population information with the survey file. Since the 36 demographic categories are a function of only three answers to the survey, a simple set of "If, then" statements tell you in which of the 36 categories each respondent falls in.

That quickly gives you the share of that states population represented by people who answered the survey in that demographic group, and you also figure out that state/demographic group’s share of the national population.

"10 minutes a call (this is charitable)"

When you look at the survey you will see that it is very short. Well over 90 percent of the people would answer no to the first question and then only have the two demographic questions to answer. In this case, the survey would only take about thirty seconds or so. Even for those who answered "yes" only a fraction would have to answer all the other questions.

Overall, only about one percent of those surveyed would be asked all the questions in the survey and even then the entire survey would only take a couple of minutes.

The 2002 survey of over one thousand respondents was completed over just eight nights. Students were often able to survey over 20 people per hour.

As to deducting these costs on my income taxes, my 1997 tax form, which I have shared with many others, shows that $8,750 was deducted for research assistants (the heading was under "legal and professional services").

We do not keep the supporting documents past the three years required by the IRS and the $8,750 does include the expenses for other projects. On the other hand, I am sure that I did not keep track of all of my expenditures so the $8,750 is a sizeable underestimate of what I spent.

The survey telephone numbers were obtained from a CD directory for the entire United States. The numbers were selected randomly so that all states were represented in proportion to their population. (See attached below for a very similar discussion relating to the 2002 survey.)

Between the two surveys 3,439 people have been interviewed. Can more information be obtained? Sure, but given my personal resources and that these surveys are such a trivial portion of my overall interests I have spent about as much time as I plan on spending on this issue.

Detailed information on the survey will be provided in my book that is due out the end of March.

So did Lott do a survey?

I couldn’t tell you and I’m wondering if Lott could. Ted Barlow puts the question to Jane Galt whether Lott has been vindicated.

There are several implausible items in Lott’s defense. As I mentioned earlier, in my real life I’ve lost data for conference papers. That is a bit different from this case, but I’m willing to allow that Lott is the absent-minded professor (well, researcher) in the extreme. Academia attracts flakes, and maybe he is the uber-flake (before anyone keys on this, I consider myself at least a minor flake).

There seem to be several possibilities about what actually occurred.
1) Lott’s story is largely true, but probably inaccurate at some points because of his absent-mindedness.

This isn’t particularly flattering to him because it demonstrates a large degree of sloppiness in conducting research, and the IRB issues are especially troubling to me. I think this fits with other observations of his work, such as the new article Kieran Healy cites that criticizes Lott and Mustard. I have made many of the same criticisms of Lott’s work in a less detailed way (not on the blog, sorry), and I find the work sloppy. Sloppy work is one thing, but Lott doesn’t seem to have any sense of humility about his work.

2) Lott completely made up the survey instead of admitting a dumb mistake.

Possible, but this seems like a much bigger lie than he needed to tell. I guess I’m willing to give him the benefit of the doubt barring specific evidence that the entire thing was a lie.

3) Lott did pay students to do a bunch of stuff, but kept such lax oversight of the project that the students made up a bunch of garbage and he bought it.

Again, not a flattering perspective of Lott’s acumen as a researcher, but entirely possible. Some of Lott’s comments regarding surveys indicate he doesn’t really grasp the whole process and as such it might be easy to pull one over on him.

4) Lott is exaggerating the survey, and once he did it the first time, he kept on doing it. Yes, this is lying, but the degree is separable from not doing any of it at all.

Regardless of whether one of the above is true or some other option, the episode is telling of Lott’s view of research and his commitment to research design and methodology.

I guess what I find most troubling is reasonably intelligent gun rights advocates throwing in with Lott. Even if Lott is wrong, there is little evidence that concealed carry increases crime (at least well-regulated concealed carry). This is pointed out in the article Kieran cites that takes Lott to task. If Lott demonstrated one thing clearly, it is that concealed carry doesn’t increase crime the way Handgun Control, Inc. argues, and that is important to know.

Even more troubling is that Kleck has done much better work. No work is perfect, but Kleck seems to have tried to do honest, well designed research into DGUs, and yet John Lott’s numbers seem to be overly important in the discussion. Some have taken issue with Kleck’s work because he uses self-reported data, but that criticism is far different from, and far less troubling than, the issues with Lott’s work. If Kleck’s estimates are problematic, it is because of the difficulty of collecting quality data even with careful planning. Lott is careless in the most basic tasks of his efforts.

Finally, Lott’s new book is coming out by Regnery? Oh come on….