Confirmation Bias in Aid and Development
Tom Murphy, one of my favorite development bloggers, had an excellent post last week about development skepticism, and the lack thereof:
“The aid skeptic is one which tries to seek the truth and is often accused of being a cynic at best and uncaring/disconnected at worse. When faced with the task of determining how billions of dollars should be used to alleviate poverty around the world and domestically, solutions should be found, tested and shared. What does not work should be openly admitted and quickly discarded. (…)
What is striking to me is the way that people react when faced with skepticism. Chris Blattman experienced this push-back when speaking at the DRI conference at the beginning of March. After presenting his ongoing research into the ties between poverty and violence, Blattman was met with strong criticism of his project. After spending 15 minutes saying that he was unsure about the causality in either direction, he was assailed for supposedly saying that poverty had nothing to do with violence.”
What Tom describes in the second paragraph above still baffles me, but it speaks to an important cognitive bias: people give much more weight than is warranted to evidence that confirms their beliefs, and they discard evidence that contradicts them. This is called confirmation bias.
When presented with evidence about a hypothesized causal relationship (e.g., “poverty does not lead to violence”), proper critical thinking should lead one to carefully evaluate whether that evidence is solid rather than default to a knee-jerk reaction (e.g., criticizing the person presenting the evidence because of some prior belief to the contrary). That is why I spend the first two weeks of the fall semester teaching the students in my development seminar the various means social scientists have at their disposal to establish causal relationships.
Over the last decade, development economists have developed a number of methods aimed at establishing the validity of causal statements. But what good are those methods when policymakers have their own ideas about what works and what does not?
As an economist in a policy school, I don’t really like to think about this. Nevertheless, I believe social scientists in general, and economists in particular, should think carefully about how to engage with people who suffer from confirmation bias, since it is no longer enough to simply put our findings out there for policymakers to use.