
Statistical Inference and Ambiguity Aversion: Making Sense of the GMO Debates

It seems as though every time someone posts something on GMOs, two kinds of people come out to comment: those who are for GMOs, and those who are against. Increasingly, it seems to me that both sides are talking past one another, and that little to no progress is being made because the inflamed rhetoric on both sides has so far failed to convince anyone that the other side might be onto something.

With this post, I’d like to make sense of the current debate surrounding GMOs. I don’t want to discuss intellectual property and corporate behavior here, as those are topics best left for future posts. Likewise for my belief that, given the choice between consuming GMOs on the one hand and being malnourished, undernourished, or dying of hunger on the other, I’ll take consuming GMOs.

I’d just like to present what I see as good arguments on each side, in order to introduce a bit of reason in the whole debate. There are smart, rational people on both sides of the GMO divide. It’s just sometimes difficult to hear their voices amid the shrill debates in which the attention is focused on who screams the loudest.

Pro: Where Is the Science?

What many in the anti-GMO crowd seem to miss is that so far, no serious scientific study (i.e., one that is peer-reviewed and published in a reputable journal) has shown that GMOs are harmful to human health. Combined with the economics of scientific publishing, that fact makes for a crushing pro-GMO argument.

Indeed, statistical inference relies on testing hypotheses such as the null hypothesis “H0: GMOs have no impact on human health” versus the alternative hypothesis “HA: GMOs are harmful to human health.” This isn’t just about GMOs: null hypotheses are formulated as “there is no impact/association/causal link,” whereas alternative hypotheses are formulated as “there is an impact/association/causal link.” I could just as well have written “H0: Land titles have no impact on agricultural productivity” versus “HA: Land titles increase agricultural productivity” (and I have).
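To make the mechanics concrete, here is a minimal sketch of such a test in Python. The data are simulated and the setting is entirely hypothetical; the point is only to show what “rejecting” or “failing to reject” H0 looks like in practice:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated health outcome for two hypothetical groups. Here the data are
# generated so that H0 ("no impact") is true: both groups share the same mean.
control = rng.normal(loc=100, scale=15, size=500)
exposed = rng.normal(loc=100, scale=15, size=500)

# Two-sample t-test of H0: "no difference in means" vs. HA: "some difference."
t_stat, p_value = stats.ttest_ind(exposed, control)

# At a conventional 5% threshold we either reject H0 or fail to reject it;
# we never "accept" or "prove" H0.
if p_value < 0.05:
    print(f"p = {p_value:.3f}: reject H0 in favor of HA")
else:
    print(f"p = {p_value:.3f}: fail to reject H0")
```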

Now, journal editors don’t particularly like publishing articles that fail to reject H0. In fact, most of them would much prefer publishing articles that reject H0 in favor of HA. For example, an article finding that there is no relationship between the color of one’s socks and one’s wage is completely uninteresting; an article finding that there is such a relationship is interesting, because it hints that there might be a causal link between socks and labor productivity. The former is highly unlikely to get published anywhere; I suspect the latter, if done well and if it told a good story, would be interesting to some labor economics journals. By the way, this is generally called publication bias:

[T]he tendency of researchers, editors, and pharmaceutical companies to handle the reporting of experimental results that are positive (i.e. showing a significant finding) differently from results that are negative (i.e. supporting the null hypothesis) or inconclusive, leading to a misleading bias in the overall published literature.

The fact that after all the time and energy spent looking for harmful effects of GMOs, we still haven’t found statistical support for them (again, the Séralini study does not count; it was garbage) should clue you in as to whether they actually exist.

When you combine that with the economics of journal publishing (journal editors compete to publish the best results), and you note that neither Science, nor Nature, nor other journals like PNAS have reported findings showing that GMOs are harmful to human health, you have a pretty convincing case. (If you think there is a conspiracy to silence anti-GMO scientific findings, then there is no hope for you.)

(Moreover, note that we can never “prove” a null hypothesis. At best, we fail to reject it. So anyone asking for “proof” that GMOs are not harmful might as well be asking for “proof” that unicorns do not exist. How do you go about proving a negative?)

In that debate, I am often astounded by how anti-scientific the anti-GMO crowd can be. This is especially so given that the anti-GMO crowd is largely composed of people on the left, the same people who are so often heard saying how anti-scientific people on the right can be when it comes to evolution, the existence of God, and so on.

Con: Ambiguity Aversion or Overweighting of Small Probabilities

Even if you buy the science discussed above, however, it is difficult to deny that ambiguity aversion can be a perfectly rational reaction to new technologies such as GMOs. Recall that in economics, we often talk of risk aversion, i.e., aversion to situations characterized by an uncertainty whose probability distribution is known (e.g., “There is a 30% likelihood that you will die of lung cancer if you haven’t quit smoking by age 55”). But we also talk of ambiguity aversion, i.e., aversion to situations characterized by an uncertainty whose probability distribution is unknown (e.g., “There is a significant likelihood that you will die of lung cancer if you haven’t quit smoking by age 55”).

People who are ambiguity-averse can have strong reactions to ambiguity, preferring to act as if they were pessimistic, or as if they lacked confidence in the information they receive about probability distributions.

Not knowing the true probability distribution over whether GMOs are harmful to human health, someone who is ambiguity-averse might rationally behave as if the likelihood that GMOs are harmful were very high. The reasoning would go something like this: “Well, maybe all the scientific evidence points to GMOs being harmless, but what if the right rigorous, peer-reviewed study showing that they are harmful has not come out yet? Better safe than sorry; I’m better off just not consuming GMOs.”
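One simple way to formalize that “better safe than sorry” logic is maxmin expected utility, where the decision maker entertains a whole set of plausible probabilities of harm and evaluates each option by its worst case. The sketch below is purely illustrative; the payoffs and the set of plausible probabilities are made up:

```python
# Maxmin expected utility sketch: an ambiguity-averse eater does not know the
# true probability that GMOs are harmful, only that it lies somewhere in a set
# of plausible values, and so evaluates each option by its worst case.

# Hypothetical payoffs, in utility units
U_EAT_SAFE = 10    # eat GMOs, no harm occurs
U_EAT_HARM = -100  # eat GMOs, harm occurs
U_ABSTAIN = 5      # skip GMOs (slightly costlier or less convenient diet)

# Probabilities of harm the eater considers plausible (unknown which is true)
plausible_probs = [0.0, 0.001, 0.01, 0.05]

def expected_utility_of_eating(p_harm):
    return (1 - p_harm) * U_EAT_SAFE + p_harm * U_EAT_HARM

# Ambiguity-averse evaluation: the worst case over the plausible set
worst_case_eat = min(expected_utility_of_eating(p) for p in plausible_probs)
worst_case_abstain = U_ABSTAIN  # abstaining involves no uncertainty here

choice = "eat GMOs" if worst_case_eat > worst_case_abstain else "abstain"
print(f"worst case from eating: {worst_case_eat:.1f}, "
      f"from abstaining: {worst_case_abstain:.1f} -> {choice}")
```

With these made-up numbers, the worst-case payoff from eating falls below the sure payoff from abstaining, so the ambiguity-averse eater abstains even though eating has the higher expected payoff under every probability she entertains except the most pessimistic one.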

Alternatively, there is the fact that we often overweight small-probability adverse events, as social science has learned from Kahneman and Tversky (1979). In their famous experiments, which led them to develop an alternative theory of decision under uncertainty, Kahneman and Tversky showed that people tend to treat a 1% likelihood that something bad will happen as much bigger than 1%. That could also explain some people’s distaste for GMOs when it comes to their own consumption of food, and it provides a perfectly logical ground on which to argue in favor of GMO labeling.
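For what it is worth, a common way to capture that overweighting is the probability weighting function Tversky and Kahneman used in their later (1992) work on cumulative prospect theory. The sketch below uses a parameter value in the range they estimated, purely for illustration:

```python
def decision_weight(p, gamma=0.61):
    # Probability weighting function from Tversky and Kahneman (1992).
    # gamma = 0.61 is one of their estimates; treat it as illustrative.
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# A 1% objective probability of a bad outcome...
p = 0.01
# ...receives a decision weight several times larger than 1%.
print(f"objective probability: {p:.1%}, decision weight: {decision_weight(p):.1%}")
```

In other words, a small objective probability of harm can loom much larger in the decision than it does on paper, which is enough to make avoiding GMOs, or at least demanding that they be labeled, look sensible to the person making the choice.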