
Assessing the Extent of Student Cheating with List Randomization (Updated)

Last semester in my principles of microeconomics course, one of my teaching assistants (TAs) caught some of our students cheating on problem sets.

I use Mankiw’s Principles of Microeconomics when teaching that course. Because that textbook is widely used, it is perhaps no surprise that the solutions to book problems are (illegally) available online. And because I assign end-of-chapter problems as homework, it is perhaps no surprise that a few enterprising, if unscrupulous, students would use those solutions to prepare their answers to problem sets.

What is more surprising is that some of those students would do so in plain view, in a common area next to the lecture hall where I taught that course. One student even copied the solution manual’s answers verbatim in her homework.

Method

After punishing the culprits, I decided to run a simple experiment to detect the extent of such cheating in my class. In doing so, I used list randomization, an elicitation technique used by Karlan and Zinman (2012) to detect the extent of misuse of funds obtained through microfinance loans.

To that end, the second midterm exam, which was administered soon after we became aware of the cheating, included a survey in which I presented the students with a list of statements and asked them to write, in a blank space at the bottom of the list, how many of the statements they agreed with. The survey was presented to the students as an effort to improve our teaching.

There were two versions of the exam:

  1. Version A (control) included a list of subjective statements (there were 11 statements, such as “I find the recitation sections useful” or “I am learning a lot from this class”), and
  2. Version B (treatment) included the original list of statements, plus the statement “I have used the online solution keys in preparing my answers to problem sets.” Except for that one difference, versions A and B were identical.

Now, we asked nothing about which statements the students agreed with; we only asked how many of the statements they agreed with.
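To make the design concrete, here is a minimal simulation sketch in Python (not part of the original study; the agreement probabilities and sample size below are made up). Because the only difference between versions is the added sensitive statement, the difference in average counts between the two groups estimates the share of respondents who agree with it:

```python
# Minimal simulation of a list experiment.
# The agreement probabilities below are hypothetical, not from the actual class.
import numpy as np

rng = np.random.default_rng(0)

n_per_group = 1000        # large hypothetical sample so the estimate is stable
n_innocuous = 11          # number of innocuous statements on both versions
p_agree_innocuous = 0.7   # assumed chance of agreeing with each innocuous statement
p_sensitive = 0.3         # assumed true share who would agree with the sensitive statement

# Control (version A): count of innocuous statements agreed with.
control = rng.binomial(n_innocuous, p_agree_innocuous, n_per_group)

# Treatment (version B): same count, plus 1 if the respondent agrees
# with the added sensitive statement.
treatment = (rng.binomial(n_innocuous, p_agree_innocuous, n_per_group)
             + rng.binomial(1, p_sensitive, n_per_group))

# The difference in means estimates the share agreeing with the sensitive statement.
print(treatment.mean() - control.mean())  # close to 0.3
```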

Assignment to the treatment or control group was random. Before handing out the exam, I had made a single pile in which A and B versions of the exam alternated. Those were then handed out to students by the TAs, who were unaware that I was conducting that experiment.

Since the TAs did not know which version they were handing to which student, randomization should be “clean,” i.e., independent of student characteristics. Any difference in the average number of statements agreed with between the treatment and control groups should therefore be due solely to the presence of that one extra statement.

Results

In the control group (n=18), the average student agreed with 7.9 statements (standard deviation of 2.4); in the treatment group (n=14), the average student agreed with 9.7 statements (standard deviation of 3.1).

A t-test of equality of means between the two groups indicates that the difference between the control and treatment groups is significant at the 10 percent level. In other words, if there were truly no difference between the two groups, the probability of observing a difference at least this large would be less than 10 percent.
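For readers who want to reproduce the test from the summary statistics reported above, here is a sketch in Python using SciPy. The post does not say whether equal variances were assumed, so the example uses Welch’s unequal-variance version (set equal_var=True for the pooled-variance test):

```python
# Two-sample t-test computed from the reported summary statistics
# (treatment: n=14, mean=9.7, sd=3.1; control: n=18, mean=7.9, sd=2.4).
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=9.7, std1=3.1, nobs1=14,   # treatment group
    mean2=7.9, std2=2.4, nobs2=18,   # control group
    equal_var=False,                 # Welch's t-test (assumption; the original test is unspecified)
)
print(t_stat, p_value)               # consistent with significance at the 10 percent level
```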

This supports the hypothesis that the average number of statements agreed with in the treatment group is significantly different from the average number agreed with in the control group. In other words, this constitutes evidence that a nontrivial proportion of students had been cheating.

Update: Ben Lauderdale pointed out a mistake I’d initially made in calculating the proportion of cheaters. Ben further adds that “it is an unfortunate feature of list experiments that out of bounds estimates are possible” and sends links to two additional articles, here and here, on the use of list experiments.

Even after fixing said mistake, it looks as though my experiment suffers from a small-N problem (my quick, back-of-the-envelope calculation suggests 80 percent of students cheated!). Indeed, letting [math]C[/math] and [math]T[/math] denote the control and treatment groups, respectively, I recover the proportion of students who cheated by (i) computing the weighted average [math]\mu=w_{C}\mu_{C}+w_{T}\mu_{T}[/math], where [math]w_{i}[/math] and [math]\mu_{i}[/math] denote the proportion and the mean of group [math]i \in \{C,T\}[/math], and (ii) taking the difference [math]\mu-\mu_{C}[/math].
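That arithmetic can be reproduced directly from the reported group sizes and means (a sketch of the calculation described above, not the original code):

```python
# Back-of-the-envelope proportion of cheaters, as described above.
n_C, n_T = 18, 14        # reported group sizes
mu_C, mu_T = 7.9, 9.7    # reported group means

w_C = n_C / (n_C + n_T)  # share of students in the control group
w_T = n_T / (n_C + n_T)  # share of students in the treatment group

mu = w_C * mu_C + w_T * mu_T   # weighted-average (pooled) mean
print(mu - mu_C)               # roughly 0.79, i.e., about 80 percent
```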