January 2003

Lotta Confusion

First, clearly I was confused about the categorization of variables. His response makes it clear how one gets to 36. Mea culpa. Second, he has released tax records, though it is impossible to garner any useful information from them as to whether a survey was conducted. Conveniently, as with any other direct evidence of the survey, all supporting documentation is gone.

It appears that I significantly overestimated the amount of time for each call as well. The reason for this? I assumed the survey was at least attempting to be of some quality. For some context, one should examine the methodology of Kleck and Gertz’s (1995) survey found in Section C-1 of the paper.

So in the first case, Lott paid some undergraduates, and accepted volunteer efforts from others, to perform a survey that:

1) was performed in dorm rooms
2) used untrained personnel
3) used mediocre sampling methods based on a commercial CD
4) had no call-backs for verification
5) had no effective supervision
6) had no questions regarding the context of the attack in relation to civilian versus police distinctions
7) had a question at the end whether the interviewer trusted the respondent with no objective guidelines
The list could go on…

Why was this survey ever done? It clearly couldn’t improve upon work already done and published by Kleck and Gertz in 1995. There are reasons to attempt a new survey after the work has been done by another researcher:

1) Situation has changed.

Not relevant in the two years since Kleck and Gertz.

2) Replicate the results

Given how primitive the survey instrument and methodology are, Lott could have had no real hope of replicating Kleck and Gertz.

3) Improve on the findings by using some sort of innovation.

Again, this would rely on the survey using an alternative to the original study that could leverage more information out of respondents. Lott’s survey would not do that in any way.

So why do it? Good question. Not only did Lott use his own money, he wasted it to the tune of several thousand dollars. It would be impossible to account for the actual costs because:
a) Lott has no effective records of paying anyone. He paid some $8,000, but there isn’t any supporting documentation
b) he apparently accepted students as both volunteers and employees
c) no one can locate a student who did the research

Even better, Lott decided to replicate the missing survey using AEI interns in the same slipshod manner, except this time the calls were made at AEI. He says the results will be available in his upcoming book and that the sample size is ~1000. Other than covering himself, this survey serves no purpose.

In his first survey the sample of DGUs was 25, meaning a margin of error of +/- 20 percent. Compared to the Kleck and Gertz survey, where oversampling produced 222 DGUs, this is asinine. Their margin of error within the DGUs would be +/- 6.7 percent.
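For reference, those margins follow from the standard formula for a proportion at 95 percent confidence. A quick sketch (the worst-case p = 0.5 is my assumption for illustration; the sample sizes, 25 and 222, are the ones discussed above):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% confidence margin of error for a proportion estimated
    from a simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Lott's reported 25 DGUs vs. Kleck and Gertz's oversampled 222
print(round(margin_of_error(25) * 100, 1))   # roughly +/- 19.6 points
print(round(margin_of_error(222) * 100, 1))  # roughly +/- 6.6 points
```

With 25 cases the estimate is nearly useless for distinguishing brandishing from firing; with 222 it at least starts to be informative.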

In responding to me:

Between the two surveys 3,439 people have been interviewed. Can more
information be obtained? Sure, but given my personal resources and
that these surveys are such a trivial portion of my overall interests I have spent about as much time as I plan on spending on this issue.

Actually, without the first survey, around 1,000 people have been surveyed. Lott appears to think that his previous results are worth something because they reside in his memory. I would point out that without the data, he won’t be able to use those results at all, and given that he doesn’t even have the original instrument, one can’t even tell if he was asking the same thing.

Ultimately, there is no point in this survey if he wishes to establish the rate of firearm use versus brandishing. Any claim to the contrary is statistical malpractice. Even more troubling is that Lott is apparently disputing the results of well-done surveys by others without even bothering to do a survey that might improve upon their results.

As Kieran Healy pointed out the other day, it seems quite clear one shouldn’t be going around spouting a number when all of the underlying research is now gone. To make matters worse, the data is utterly meaningless with regard to the 98% matter, so why would one repeat it?

If you take Lott at his word and the first survey was done, I’m not sure that really adds to his credibility. It might change the reason for one’s conclusion that he isn’t credible. In one case he would be a fraud; in the other he could be an absolute incompetent or just someone selling a bunch of malarkey.

Lotta Survey

Here is the survey and descriptive material John Lott passed along:

D. Survey on Defensive Gun Use

Below is the survey that was used to identify the rate of defensive gun
use.

Hello, my name is _______, and I am a student at ________ working on a
very brief survey on crime. The survey should take about one minute.
Could I please ask you a few questions?

1) During the last year, were you ever threatened with physical
violence or harmed by another person or were you present when someone else faced such a situation?

(Threats do not have to be spoken threats. Includes physically
menacing. Attacks include an assault, robbery or rape.)

a) Yes
b) No
c) Uncertain
d) Declined to answer

(Just ask people "YES" or "NO." If they answer "NO" or "Decline to
answer," go directly to demographic questions. If people are
"Uncertain" or say "YES," proceed with question 2.)

2) How many times did these threats of violence or crimes occur?
_____

3) Which of the following best describes how you responded to the
threat(s) or crime(s)? Pick the one from the following list that best describes your behavior, or that of the person you were with, for each case faced.

a) behaved passively
b) used your fists
c) ran away
d) screamed or called for help
e) used a gun
f) used a knife
g) used mace
h) used a baseball bat or club
i) other

(Rotate these answers (a) through (h), and place a number from 0 to whatever
for each option. Stop going through the list if they volunteer answer(s) that account for the number of threats that they faced.)

4) This is only done if the respondent answers "e" (a gun) to question
3.

If a gun was used, did you or the other person you were with:

a) brandish it
b) fire a warning shot
c) fire at the attacker
d) injure the attacker
e) kill the attacker

(Again, place a number from 0 to whatever number is appropriate for each
option. Rotate answers.)

5) Were you or the person you were with harmed by the attack(s)?

a) Yes
b) No
c) Refused to answer

(We obviously have the area code for location, write down sex from the
voice if possible, otherwise ask.)

Two demographic questions asked of all participants.

What is your race? black, white, Hispanic, Asian, Other.

What is your age by decade? 20s, 30s, 40s, so on.

Question for surveyor: Is there any reason for you to believe that
the person was not being honest with you?

a) Didn’t believe respondent at all
b) Had some concerns
c) Had no serious concerns

Write up by James Knowles of the discussion of the survey:

We had a small army of interns and AEI staff making phone calls. The
callers for any given night varied according to who was available/willing to make phone calls. I was here every night supervising from my office at AEI. The survey was conducted over eight nights. Calls were made between 7pm and 9pm local time. (deleted material about workers that isn’t relevant)

We used a phonebook program from a company called infoUSA; the program
was called Select Phone Pro version 2.4. The program has a random function. First, we calculated how many numbers should be drawn from each state; we decided that we would pull 4,000 numbers (based on how much the
PhonePro program gives us for free). Then we took the populations of each state from the Census and assigned the quantity of numbers that we intended to get from the program. Then, in PhonePro, we picked a state, then sorted the state’s list by area code, then randomly generated a number (using Excel’s analysis pack) as a starting point, and then the Phone Pro program would export a number every so often from the list until it reached the desired number of listings exported from the state. This may be something that is easier explained in a conversation; my direct line is xxx.
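For what it’s worth, the allocation Knowles describes (proportional draws by state population, then a systematic export from a sorted list with a random start) boils down to something like the sketch below. The state populations here are made-up stand-ins, not the Census figures AEI used:

```python
import random

def allocate_by_population(populations, total_numbers=4000):
    """Assign each state a share of the total draw proportional to
    its population, as in the Knowles write-up."""
    total_pop = sum(populations.values())
    return {state: round(total_numbers * pop / total_pop)
            for state, pop in populations.items()}

def systematic_sample(listings, quota, rng=random):
    """Export every k-th listing from a sorted state list, starting
    at a random offset, until the quota is reached."""
    if quota <= 0 or not listings:
        return []
    step = max(1, len(listings) // quota)
    start = rng.randrange(step)
    return listings[start::step][:quota]

# Hypothetical populations, purely for illustration.
states = {"IL": 12_400_000, "IN": 6_100_000, "WI": 5_400_000}
quotas = allocate_by_population(states, total_numbers=400)
print(quotas)  # → {'IL': 208, 'IN': 102, 'WI': 90}
```

Note that systematic sampling from an area-code-sorted list is not the same as a simple random sample, though it does spread the draw across the state.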

Lotta Response

Lott responds to the previous posts in e-mail. I’ll post it in its entirety out of fairness and address the issues in later posts:

My responses:

I have already discussed these issues, but it is obvious that I need to
repeat what I have previously sent out.

I have previously posted the survey questions used (see below for
another copy). The 36 demographic categories for the 1997 survey were the exact same ones that I used in all the regressions in my book More Guns, Less Crime. The breakdown is by age (six age categories — 10 to 19, 20 to 29,etc.), by sex, and by race (black, white, other). Information on where the person lived is immediately available because we have the person’s area code and address from the telephone CD that we used. As to demographic questions, you will see that we asked two race and age. The student conducting the survey would fill in sex on their own unless there was a question.

"This adds another significant amount as even a simple database for
2400 respondents would take time to construct–especially with 36
categories."

This is simply incorrect. The demographic information is a product of
only four variables. First you merge the census demographic and population information with the survey file. Since the 36 demographic categories are a function of only three answers to the survey, a simple set of "If, then" statements tells you in which of the 36 categories each respondent falls.

That quickly gives you the share of that state’s population represented
by people who answered the survey in that demographic group, and you also
figure out that state/demographic group’s share of the national
population.

"10 minutes a call (this is charitable)"

When you look at the survey you will see that it is very short. Well
over 90 percent of the people would answer no to the first question and then only have the two demographic questions to answer. In this case, the survey would only take about thirty seconds or so. Even for those who answered "yes" only a fraction would have to answer all the other questions.

Overall, only about one percent of those surveyed would be asked all
the questions in the survey and even then the entire survey would only take a couple of minutes.

The 2002 survey of over one thousand respondents was completed over
just eight nights. Students were often able to survey over 20 people per
hour.

As to deducting these costs on my income taxes, my 1997 tax form, which
I have shared with many others, shows that $8,750 was deducted for
research assistants (the heading was under "legal and professional services").

We do not keep the supporting documents past the three years required by the IRS and the $8,750 does include the expenses for other projects. On the other hand, I am sure that I did not keep track of all of my expenditures so the $8,750 is a sizeable underestimate of what I spent.

The survey telephone numbers were obtained from a CD directory for the
entire United States. The numbers were selected randomly so that all
states were represented in proportion to their population. (See attached
below for a very similar discussion relating to the 2002 survey.)

Between the two surveys 3,439 people have been interviewed. Can more
information be obtained? Sure, but given my personal resources and
that these surveys are such a trivial portion of my overall interests I have spent about as much time as I plan on spending on this issue.

Detailed information on the survey will be provided in my book that is
due out the end of March.
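To make the weighting claim concrete: 36 cells from three answers (six age brackets, two sexes, three races), merged against census shares, would reduce to something like the sketch below. The helper names and the census-share figures are hypothetical, purely to illustrate the mechanics Lott describes:

```python
AGE_BRACKETS = ["10-19", "20-29", "30-39", "40-49", "50-59", "60+"]
SEXES = ["male", "female"]
RACES = ["black", "white", "other"]

def demographic_cell(age, sex, race):
    """Map the three survey answers into one of the
    6 x 2 x 3 = 36 demographic categories Lott describes."""
    decade = min(max(age // 10, 1), 6) - 1  # clamp to 10-19 .. 60+
    return (AGE_BRACKETS[decade], sex, race)

def cell_weights(respondent_cells, census_shares):
    """Weight each cell by its census population share divided by its
    share of the sample (a standard post-stratification step)."""
    n = len(respondent_cells)
    sample_share = {}
    for cell in respondent_cells:
        sample_share[cell] = sample_share.get(cell, 0) + 1 / n
    return {cell: census_shares[cell] / share
            for cell, share in sample_share.items()}

print(len(AGE_BRACKETS) * len(SEXES) * len(RACES))  # 36
```

Mechanically this is trivial, which is Lott’s point; the dispute is over whether the underlying data ever existed, not whether the arithmetic is hard.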

So did Lott do a survey?

I couldn’t tell you, and I’m wondering if Lott could. Ted Barlow asks Jane Galt whether Lott has been vindicated.

There are several implausible items in Lott’s defense. As I mentioned earlier, in my real life I’ve lost data for conference papers. That is a bit different than this case, but I’m willing to allow that Lott is the absent-minded professor (well, researcher) in the extreme. Academia attracts flakes and maybe he is the uber flake (before keying on this issue, I consider myself at least a minor flake).

There seem to be several possibilities about what actually occurred.
1) Lott’s story is largely true, but probably inaccurate at some points because of his absent-mindedness.

This isn’t particularly flattering to him because it demonstrates a large degree of sloppiness in conducting research, and the IRB issues especially are troubling to me. I think this fits with other observations of his work, such as the new article Kieran Healy cites that criticizes Lott and Mustard. I have made many of the same criticisms of Lott’s work in a less detailed way (not on the blog, sorry) and I find the work sloppy. Sloppy work is one thing, but Lott doesn’t seem to have any sense of humility about his work.

2) Lott completely made up the survey instead of admitting a dumb mistake.

Possible, but it seems like this is a much bigger lie than he needed to make. I guess I’m willing to give him the benefit of the doubt barring specific evidence of the entire thing being a lie.

3) Lott did pay students to do a bunch of stuff, but kept such lax oversight of the project that the students made up a bunch of garbage and he bought it.

Again, not a flattering perspective of Lott’s acumen as a researcher, but entirely possible. Some of Lott’s comments regarding surveys indicate he doesn’t really grasp the whole process and as such it might be easy to pull one over on him.

4) Lott is exaggerating the survey, and once he did it the first time, he kept on doing it. Yes, this is lying, but the degree is separable from just not doing any of it.

Regardless of whether one of the above is true or some other option, the episode is telling of Lott’s view of research and his commitment to research design and methodology.

I guess what I find most troubling are reasonably intelligent gun rights advocates throwing in with Lott. Even if Lott is wrong, there is little evidence that concealed carry increases crime (at least well-regulated concealed carry). This is pointed out in the article Kieran cites that takes Lott to task. If Lott demonstrated one thing clearly, it is that concealed carry doesn’t increase crime as Handgun Control, Inc. argues, and that is important to know.

Even more troubling is that Kleck has done much better work. No work is perfect, but Kleck seems to have tried to do honest, well-designed research into DGUs, and yet John Lott’s numbers seem to be overly important in the discussion. There have been some issues taken with Kleck’s work because he uses self-reported data, but that criticism is far different and far less troubling than the issues with Lott’s work. If Kleck’s estimates are problematic, it is due to the problems of collecting quality data even with careful planning. Lott is careless in the most basic tasks of his efforts.

Finally, Lott’s new book is coming out from Regnery? Oh come on….

Lott Category Question

Calpundit points out the demographic questions that Lott asks and I’m even more confused.

Lott claims there are 36 categories of demographics he uses to weight the respondents. The demographics collected include area code, gender, race and age by decade. And this is a real question: what are the 36 categories? I don’t have the book handy, so maybe this is bloody obvious and I’m just being obtuse. Assuming the demographics are broken down, area code would be one category (?), gender two, race four, and age by decade seven or eight. The math doesn’t make any sense.

I don’t understand how one can reconcile 36 categories with those questions. Either there are more categories or fewer. Or am I missing something? I could be, but I don’t know what.

This matters because he says that the 36 categories were used for weighting.

Chicago Trib Round-up

Consider this a catch-all for interesting stuff:

Eric Zorn has been crusading on the death penalty along with the investigative journalists Mills and Armstrong. For those who watched CNN’s coverage of Ryan’s commutation speech, Zorn was the guy standing next to Mills when they interviewed him. Zorn had a bemused look as his less well-known colleague got some national recognition. A few days ago Zorn followed up with Anthony Porter’s original lawyer, Dan Sanders. Unfortunately, Sanders seems to have had a breakdown and is now living with his mother and playing Scrabble. One can hope there is more to his story than what we see today.

In the same column, Zorn blasts Moseley-Braun and gives warm thoughts to Obama in the Senate race. Dan Hynes has a ton of organizational support and he’ll be tough to beat, but it is hard to find anyone as exciting as Obama.

As Talk Left pointed out yesterday, Thomas Breen is working to clear two men he convicted, and the Chicago Tribune pays him a tribute today.

The Licenses for Bribes trial of Fawell continues, and Fawell is in trouble.

Daley’s Bold Move

Often someone as entrenched as Richard Daley would simply shrug off Chicago’s high murder rate and keep on winning elections. Much like his much-praised effort to improve the Chicago Public Schools, he is pushing for a significant change in the deployment of police in Chicago. As the Tribune points out, this is a politically explosive move. Residents in safe neighborhoods take their police protection for granted, and this is going to rankle many.

Bold leadership is required to continue the urban resurgence seen in our major cities. Daley, with all of his faults, is providing that leadership.