Impact Evaluation and the “Not in My Backyard” Syndrome: Comments and Discussion

My post last Monday on how everyone is talking about impact evaluation but no one seems to want to be evaluated — or NIMBY, for “not in my backyard” — has generated a good bit of discussion, which I think is great. Here is a sample.

From the comments, Ben writes:

I’m sympathetic to the NGO guy. Put yourself in their shoes:
1. The evaluator is going to tell you whether your program is a success or a failure, using a methodology you don’t understand, in a process completely out of your control.
2. The methodology is billed as “scientific,” which is intriguing, and many people put great stock in it. However, it also has some smart critics who seem to make good points.
3. You’re aware that evaluations using this methodology sometimes find no impact, not because the project has failed but because of data problems, timing, etc. You’re not sure how you would try to explain this if it happened in your case.
4. You’ve seen practitioners of the methodology in a room together, tearing apart each other’s results, which makes you wonder how strong these results are really going to be and who is going to come along and tear apart the results of your evaluation.
5. You’ve heard stories of academics completely losing interest in evaluating a project and leaving it to ruin as soon as it becomes clear that the results won’t make it into a highly ranked journal.

Would you really want your project evaluated? I wouldn’t. In fact, I think it’s more surprising when someone wants to have their project evaluated than when they don’t.

Those are all good points, and all points to which I am sympathetic. Yet I cannot help but think that an RCT of the intervention the NGO is known for is probably the clearest, cleanest methodology available for impact evaluation. Ben’s conclusion (“Would you really want your project evaluated? I wouldn’t”) does highlight my initial point, however, which was: everyone is talking about this, but no one wants it. And if no one wants it, why on earth do those NGOs have directors of monitoring and evaluation? My hunch: because the appearance of wanting to monitor, evaluate, do research, and learn keeps donors coming back with more money.

MJ, from the Bottom Up Thinking blog, writes:

No freebies are truly free. Volunteers always need management support to be of any use and to get anything out of their internship. Your offer is akin to a tied donation (only to be used for impact evaluation by UofM).

Now maybe you know this NGO’s project area intimately, but as an NGO practitioner I have gotten more than a bit wary of these kinds of offers. Inevitably, the researchers need all kinds of logistical support, etc. Even if you guys do not, and even if that was totally clear in your email, a busy person (as B*** claims to be) may have misread the email. On the other hand, if you turned up with an offer of, say, 30% of the core costs of the project being evaluated, plus a free evaluation (by rigorous, unsentimental academics without a particular axe to grind), then I would hope/expect they would have been much more enthusiastic.

Here is the deal, though: what I offered them was to reproduce their intervention in villages that have not yet been treated; this would have been entirely on my dime. All I wanted to know was how, exactly, the intervention is conducted, so that I could reproduce it faithfully. Surely there are written protocols somewhere that can be sent to researchers interested in the intervention? To be fair, B*** did put me in touch with a retired academic who is supposedly helping the NGO, but that person never responded to my emails either. Moreover, it is not clear that that person has conducted any research on the NGO’s policy intervention beyond a few op-eds.
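For concreteness, here is a minimal sketch of the kind of design I proposed, written in Python with entirely hypothetical village names and made-up outcome data; nothing in it is specific to the NGO’s actual intervention. The logic is simply to randomize the reproduced intervention across not-yet-treated villages and compare mean outcomes at endline:

```python
import random
import statistics

random.seed(42)

# 20 not-yet-treated villages; the names are hypothetical placeholders.
villages = [f"village_{i:02d}" for i in range(1, 21)]

# Randomize at the village level: half receive the reproduced intervention.
random.shuffle(villages)
treatment = set(villages[:10])
control = set(villages[10:])

def estimate_impact(endline_outcomes):
    """Difference in mean endline outcomes, treatment minus control.

    `endline_outcomes` maps village name -> mean household outcome at endline.
    """
    treated = [y for v, y in endline_outcomes.items() if v in treatment]
    untreated = [y for v, y in endline_outcomes.items() if v in control]
    return statistics.mean(treated) - statistics.mean(untreated)

# Illustration with made-up endline data (a true effect of 0.3 is built in):
fake_outcomes = {
    v: random.gauss(1.0 + (0.3 if v in treatment else 0.0), 0.5)
    for v in villages
}
print(f"Estimated impact: {estimate_impact(fake_outcomes):.2f}")
```

In practice one would also run a power calculation before fielding anything and account for village-level clustering when computing standard errors, but the core logic really is this simple, which is a big part of the methodology’s appeal.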

On Reddit, yoghurthear wrote:

Chambers wrote a little about the culture of ‘negative academics’ and ‘positive practitioners’, saying that the former look for what has gone wrong and the latter look for things which might go right (paraphrased somewhat; he was far more eloquent!). As [Redditor] dontjustassume noted, many NGOs may not want evaluations of projects which have gone wrong, because that can lead to a serious withdrawal of funding. Yet learning from mistakes is, of course, of the utmost importance.

I can highly recommend Chambers’ book ‘Putting the Last First’. It’s completely accessible and a really interesting read.

Thank you for the recommended reading, which is now on my wish list (Amazon link here). And the “withdrawal of funding” argument was the gist of my post, really, though I now realize that might not have been terribly clear. Whereas many people think NGOs are out to maximize social welfare (however defined in view of each NGO’s activities), I suspect many NGOs are out to maximize the amount of money they receive from donors, subject to the constraint that they at least appear to be doing good. Am I right or wrong? Ultimately, it’s an empirical question, but exchanges like the one I’ve had with the NGO in my original post make me cynical.

Redditor dontjustassume had initially commented:

On the one hand, it is kind of obvious that NGOs might not want their work to be evaluated, and it might not be because they believe in it so strongly, but because they know that a particular project was crap. On the other hand, the author seems to be seriously underestimating the amount of work usually involved on the NGO’s side in servicing a third-party evaluation.

About the latter point: that is true; I imagine some researchers can get pretty high-maintenance over the course of an impact evaluation, and I should definitely be clearer next time about how low-maintenance my colleagues and I would be. Live and learn.

This is exactly the kind of healthy discussion I was hoping to foster with my post (and with all of my more serious posts, really). Contrary to what the act of blogging often seems to imply, I don’t think I have a monopoly on truth, and to paraphrase a friend and colleague, I have “strong opinions, weakly held.” So just as I hope I can convince some of you, I also hope to change my mind through discussion.*

 

* Yes, that includes GMOs. As soon as someone shows me a rigorous, peer-reviewed piece on how GMOs have negative consequences on human health, I’ll be very happy to change my views. No, the Séralini study does not count.


Comments

  1. MJ

    One thing I forgot to mention in my previous comment is that I used to strongly agree with the assertion you made in your original post about smelling a rat when people excuse themselves for lack of time. My view was, and still is to a significant degree, that you make time for the things you consider important.

    But that was before I ended up the swamped manager of a small NGO. Now my median response time for unsolicited emails is probably over a month. Staff at larger NGOs ought to be less stretched, and the obvious overlap with their responsibilities might, you would hope, have elicited a more rapid and fruitful reply, but never underestimate the degree to which NGO staff may be totally overworked. It’s in our blood.

    On the other hand I have also been frustrated by lack of response from people in NGO-land to what ought to be simple requests that clearly fit with their publicly stated mission.

    My life is full of such contradictions!

  2. ben

    Thanks for responding to my comment, I’ve enjoyed the broader discussion on this topic as well.

    When you add it all up, I agree with your point here: NGOs and other funders claim a greater commitment to accountability and evaluation than their behavior reveals their true preferences to be. However, in my experience (as an economist who designs and implements impact evaluations), this reflects less a willful decision to deceive in order to please donors and more a complicated tension between individuals within the same organization who have differing beliefs and interests. Within any NGO or other funder, some people believe in evaluation, and some are skeptical or outright resistant. In my work, I’ve been in many situations where I’ve been frustrated by program staff who aren’t on board, but I’ve never been in a situation where it felt like I’d been hired to do an evaluation under false pretenses. As I said, in the aggregate I think you’re right and there’s an underlying disconnect. But I would think of it less as a unitary decision on the part of the organization to pursue a strategy of hypocrisy and more as an equilibrium outcome within the organization.

    That makes our lives as evaluators more complicated: we can’t just throw our hands up and write them all off as hypocrites; we actually have to work on changing people’s minds at the margins. But in my view, learning how to do that better is one of the most worthwhile things we can do.

    Lastly, I completely agree with your point that a big advantage of RCTs is that the methodology is completely comprehensible to people who don’t have a highly technical background, unlike quasi-experimental methods. That’s an argument in favor of RCTs that doesn’t get as much attention as it should.
