Published on Development Impact

Book Review: Failing in the Field – Karlan and Appel on what we can learn from things going wrong

Dean Karlan and Jacob Appel have a new book out called Failing in the Field: What we can learn when field research goes wrong. It is intended to highlight research failures and what we can learn from them, sharing stories that might otherwise be told only over a drink at the end of a conference, if at all. It draws on a number of Dean’s own studies, as well as those of several other researchers who have shared stories and lessons. The book is a good short read (I finished it in an hour), and definitely worth the time for anyone involved in collecting field data or running an experiment.

A typology of failures
The book divides failures into five broad categories, and highlights types of failures, examples, and lessons under each:

  1. Inappropriate research settings – this includes doing projects in the wrong place (e.g. a malaria prevention program where malaria is not much of an issue), at the wrong time (e.g. when a project delay meant the Indian monsoon season started, making roads impassable and making it harder for clients to raise chickens), or with a technically infeasible solution (e.g. trying to deliver multimedia financial literacy training in rural Peru using DVDs when loan officers couldn’t find audio and video setups).
  2. Technical design flaws – this includes survey design errors, such as bloated surveys full of questions with no clear plan for how they will be used in analysis, and poorly worded questions; inadequate measurement protocols (in one example, because the survey offered a couple of dollars in incentivized games, others would pretend to be the survey respondents in order to participate, and the team had no good ID system); and mistakes in randomization protocols (e.g. a marketing firm sorting a list of donors by date of last donation and splitting it in half, so that the treatment group were all more recent donors than the control – see the short sketch after this list).
  3. Partner organization challenges – a big part here is realizing that even if the top staff are committed, lower-tier staff may have limited bandwidth and flexibility. Examples include programs that tried to use existing loan officers to deliver financial literacy training, only to find that many of them were not good teachers, and a bank whose tellers ignored a script for informing customers of a new product because they felt it slowed them down from serving clients as quickly as possible.
  4. Survey and measurement execution problems – these include survey programming failures (e.g. trying to program a randomized question ordering, only for the question to end up being skipped for half the sample); misbehaving surveyors (who make up data); not being able to keep track of respondents (as in the impersonation example above); and measurement tools not working (as in my RFID example).
  5. Low participation rates – they separate this into low participation during intake (when fewer people apply for the program than expected) and low participation after random assignment (when fewer of those assigned to treatment actually take it up). They note how partner organizations are often overconfident on both counts. Examples include financial education in Peru, where only 1 percent of groups assigned to treatment completed the full training, and a new loan product in Ghana, where delays in processing and a cumbersome application process meant that while 15 percent of business owners visited the branch, only 5 percent applied and 0.9 percent received a loan.
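As an aside, and purely my own illustration rather than anything from the book, the donor-list mistake in item 2 is easy to see in a few lines of Python: sorting by recency and then splitting the list in half builds a systematic difference into the treatment-control comparison, while shuffling first (with a recorded seed) does not. The variable names, sample size, and seed below are all made up for illustration.

# Illustration only (not from the book): why sort-then-split is not random assignment.
import random

rng = random.Random(2017)  # hypothetical seed, recorded for replicability
donors = [{"id": i, "last_donation_year": rng.randint(2010, 2016)} for i in range(1000)]

# Flawed protocol: sort by recency, assign the first half to treatment.
by_recency = sorted(donors, key=lambda d: d["last_donation_year"], reverse=True)
half = len(by_recency) // 2
flawed_treatment, flawed_control = by_recency[:half], by_recency[half:]

# Correct protocol: shuffle first, then split.
shuffled = donors[:]
rng.shuffle(shuffled)
treatment, control = shuffled[:half], shuffled[half:]

def mean_year(group):
    return sum(d["last_donation_year"] for d in group) / len(group)

# The flawed split shows a large gap in donation recency between arms; the random split does not.
print("Flawed split:", round(mean_year(flawed_treatment), 2), "vs", round(mean_year(flawed_control), 2))
print("Random split:", round(mean_year(treatment), 2), "vs", round(mean_year(control), 2))

In practice one would typically also stratify the assignment and log the seed, but the basic point stands: any deterministic ordering applied before the split undoes the randomization.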
After discussing each of these general categories with examples, the book goes into more depth through case studies of six failed projects, detailing what went wrong and why, as well as the lessons learned.

A few general lessons
So many of the failures seem to have come from a lack of piloting and from launching immature products – studies built around new products without all the implementation issues being sorted out in advance. They note how hard this is to avoid: researchers and partners are often excited and eager to launch their new product, and adding a step that might send everyone back to the drawing board can seem politically untenable.
One lesson they note is that individual failures tend to snowball quickly if they are not caught, so a single study can end up facing several of these problems at once.
Another is that researchers find it hard to know when to walk away. They give an example of a project on supply-chain credit, where the researchers lined up funding, a partner bank, a consumer product distributor and more, and then kept hitting roadblocks such as software problems at the bank or changes in design. After three baseline surveys, all of which had to be scrapped, and nearly three years, they finally abandoned the project – but “more than once they considered shutting down the project, but there always seemed to be a glimmer of hope that encouraged them to try again or hold another meeting with the partner”. Another example comes up in one of the case studies – when a research team’s planned project on sugarcane fell through, they hastily put together a poultry loan product instead, which also failed.

Some reactions
I was struck most of all by how mundane many of the stories of failure were – products were launched too soon, they required extra work from partner-organization staff who didn’t end up doing it, not enough people applied for a program, someone messed up the survey coding, and so on. Failure here does not come from the survey team being accused of witchcraft, enumerators who have never ridden a motorcycle claiming to be experts, or the research team all contracting malaria. Instead it comes, by and large, from a set of problems that in retrospect could often have been controlled and may seem obvious to an outside party. This is why sharing these lessons is all the more important – these are not self-deprecating funny stories, but things researchers might otherwise not share for fear of looking bad.
The second point is that all of the examples in the book come from working with NGOs, reflecting much of the work Dean has done. Working with governments brings a whole new set of ways to fail – from basic bureaucratic problems, to political economy, to researchers having much less control over what gets done – and I am sure there are many lessons to be drawn from such failures.

Share your experiences
This book is a great first step in sharing lessons from failure, but it is only a beginning. We agree with Dean and Jake that more learning from failure should take place. We are therefore pleased to announce that we will partner with them in having a special “learning from failure” series on the blog. We’ve collated some of our previous posts on failure in one place, and would love others to share their experiences. If you’ve got a failure to share, please email it to developmentimpactblog@gmail.com.
Points to note:
  • We are interested in examples of failures of impact evaluations (research project failures), as opposed to failures of development projects more generally.
  • Please send it as a Word document in 11 point Calibri font.
  • If you have any pictures/graphs, send them as .jpg attachments.
  • Keep it short, and try to draw out the lessons for others.
  • Make the advice specific and concrete – as Markus told me, “a lesson to work harder and pay more attention” isn’t so helpful.
  • The intention is not to embarrass anyone, but help improve work going forward. So anonymize parties as needed.

Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
