Monitoring before Impact Evaluation

The case for finding the ‘right fit’

By Dr Maithreyi Gopalan

Long before I began obsessing over statistical models and randomised controlled trials to understand which interventions or policies work in promoting student outcomes in education, I used to volunteer for an education nonprofit in Chennai. The nonprofit, a wonderful after-school programme based in four urban slums in Chennai, offered a host of programming—academic homework support, one-on-one tutoring, computer training classes, and extra-curricular classes that provided exposure to music, dance, theatre, and sports for children attending local K-12 schools. The nonprofit was founded and run by close friends of mine, with whom I have had long conversations and debates about the impact the after-school programming had on the children who attended it. Did the academic support change students’ performance in school? Did the extra-curricular classes improve students’ creativity? How can we tell for sure? In other words, would the children’s performance and lives be different if the educational nonprofit did not exist in those slums?

Academics like me obsess over impact evaluation by trying to answer such hypothetical questions—a style of counterfactual thinking used to understand the causal effect of a programme or intervention. Not just academics but also funders, governments, and large multinational organisations such as the United Nations and the World Bank obsess over impact evaluations of nonprofits. However, many mission-driven organisations, even the most diligent ones, such as my friend’s educational nonprofit in Chennai, struggle to unpack their true impact. Yet what I have realised now, after many more years of training in programme evaluation during my doctoral studies, is that I may have been asking my friend the wrong question. Or, more importantly, asking it at the wrong time—in the early stages of a nonprofit’s evolution.

Before we can ask what impact the education nonprofit has on student outcomes, we need to take several more baby steps. As the saying goes, we should not run before learning to crawl… or something to that effect. And those baby steps are the ones that authors Dean Karlan and Mary Kay Gugerty eloquently describe in their latest book, The Goldilocks Challenge: Right-Fit Evidence for the Social Sector. I highly recommend this book in its entirety for nonprofits, but I provide a summary of its main thesis below.

The push from donors, funders, and governments across the world has led to a vast increase in demand for rigorous impact evaluations of nonprofits and interventions in the development sector. While the push for rigorous programme evaluation using gold-standard practices such as randomised controlled trials (RCTs), which can definitively answer the “impact” question, has in most cases been a step in the right direction, it has also caused an increase in poorly conducted impact evaluations. Ill-defined impact evaluation studies end up being costly and waste money that would have been better invested in a much more important, preliminary step for nonprofits—monitoring. The authors provide a guide for nonprofit organisations to build robust monitoring plans and find “right-fit” evidence strategies before they embark on a chase to show “impact”. The authors show that a “right-fit” evidence system also entails considering when to measure impact, not just how.

Professors Dean Karlan and Mary Kay Gugerty are those rare academics who straddle the research-practitioner space elegantly—a constant aspiration for an early-career researcher and wannabe practitioner like me! Both have vast experience with nonprofits worldwide and have collaborated with many organisations to evaluate impact, as well as consulted with several nonprofits on effective monitoring that can aid accountability and the adoption of effective management strategies. They provide several case studies and specific guidelines on how organisations can develop effective monitoring systems. For instance, through specific examples, the authors show how many organisations end up collecting more data than they have the skills or resources to analyse, which results in wasted resources and inefficiencies. On the other hand, they also caution against collecting the wrong data, and against simply tracking changes in outcomes over time without paying close attention to whether the organisation caused the changes or they just happened to occur alongside the programme.

They use a simple acronym—CART—described below to guide organisations’ monitoring efforts, specifically the data collection strategies within their monitoring systems. CART stands for:

* Credible: Collecting high-quality data, even if only on a few select metrics, that can be analysed appropriately;
* Actionable: Collecting data that, first and foremost, helps the organisation understand its own mission and improve future decisions and strategies;
* Responsible: Collecting data at an affordable cost, such that the benefits outweigh the costs; and
* Transportable: Collecting data that builds knowledge not just for the organisation but for other similar programmes, so it can be used in the future and by others.

While the above principles may seem straightforward, the book offers many specific case studies that bring these abstract principles to life effectively. I also really liked the authors’ emphasis on one of the very first steps that organisations should take when developing their monitoring plans—a clear articulation of the theory of change embedded in the nonprofit’s core mission. Again, through examples, the authors show how a theory of change articulates the inputs that go into a programme, the activities that get done, and the change that is logically expected to result from those inputs and activities in the world. Clearly articulating a programme’s theory of change helps dispel cluttered or confused ideas about how or why the programme works—ideas that can otherwise result in significant variation in how the programme is implemented across time or sites. A clear theory of change can guide right-fit data collection and monitoring strategies because it makes clear which metrics need to be tracked.

Further, a clear theory of change provides the clearest feedback to guide programme learning and continuous improvement within an organisation. Finally, the book also provides specific guidance on how organisations can use monitoring tools to ask questions about implementation fidelity. Does the programme’s implementation follow its stated logic model and underlying theory of change? This might be the most important question a nonprofit could ask in the early stages of its operations. Without monitoring strategies in place that clearly articulate the mission, vision, and implementation fidelity of its programmes, a chase to assess impact or conduct impact evaluations is at best misguided and ill-timed.
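To make this concrete, here is a minimal, hypothetical sketch in Python (my own illustration, not something from the book) of how a small after-school programme might write down its theory of change and use it to separate the metrics worth monitoring routinely from the outcomes that only an impact evaluation could credibly attribute to the programme. The programme elements and metrics named below are invented for illustration.

```python
# A minimal, hypothetical sketch of a theory of change ("logic model") for an
# after-school programme, and of how it can drive right-fit monitoring.
# All names and metrics are illustrative, not taken from the book.

from dataclasses import dataclass
from typing import List


@dataclass
class LogicModel:
    inputs: List[str]      # resources the organisation puts in
    activities: List[str]  # what the programme actually does
    outputs: List[str]     # direct, countable products of those activities
    outcomes: List[str]    # the change the programme hopes to cause

    def monitoring_metrics(self) -> List[str]:
        # Inputs, activities, and outputs are under the organisation's control:
        # they can be tracked routinely and speak to implementation fidelity.
        return self.inputs + self.activities + self.outputs

    def impact_questions(self) -> List[str]:
        # Outcomes require a counterfactual (e.g. an RCT) before any change
        # can be attributed to the programme; tracking them alone is not enough.
        return self.outcomes


tutoring = LogicModel(
    inputs=["volunteer tutor hours", "classroom space", "learning materials"],
    activities=["one-on-one tutoring sessions", "homework support classes"],
    outputs=["sessions delivered per week", "students attending regularly"],
    outcomes=["improved test scores", "higher school completion rates"],
)

print("Track routinely:", tutoring.monitoring_metrics())
print("Reserve for impact evaluation:", tutoring.impact_questions())
```

The sketch is deliberately simple: the top three layers of the logic model can be tracked through routine, CART-style monitoring, while the bottom layer requires a counterfactual before any causal claim can be made.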

In all, I am a huge fan of this book, if you couldn’t tell by now! But how can we ensure that nonprofits and other organisations take its guidance seriously? First, the ever-increasing demand and push for impact evaluations has shifted incentives away from investing in and building robust monitoring systems in the first place.

Second, organisations get far too little support from researchers in this regard. Cash-strapped organisations driven by a mission to create change in the communities in which they operate hardly have the time and resources to develop and implement effective monitoring plans. This is where I see a huge role for research-practice partnerships, which can set up a mutually reinforcing, symbiotic cycle between scholars and practitioners. I believe that researchers like me should actively partner with nonprofits and other organisations to help build monitoring plans and systems, and not just partner to conduct impact evaluations that are more easily publishable in peer-reviewed journals. By establishing longer-term partnerships with nonprofits and organisations focused on bringing about social impact, researchers can begin with consultations on monitoring systems first, with an eye towards impact evaluation in the future.

Coming full circle, I am now convinced that my friend running the educational nonprofit I described earlier was right—we need to start with baby steps… monitor programme implementation and design before jumping into impact evaluations. Developing robust data collection systems that follow the CART principles is one great place to start.

(If you don’t have access to the book yet, you can find more resources about the topic here.)

Dr Maithreyi Gopalan is an Assistant Professor at The Pennsylvania State University. She recently completed her Ph.D. in Public Policy at Indiana University, Bloomington. Her research interests lie in programme evaluation, specifically within education. In her research, she asks what education policies and interventions work, and how evidence from these interventions and evaluations can be used to design and implement effective education policy. Prior to her doctoral studies, she spent close to a decade in finance in India and London. She has always been passionate about education as one of the primary drivers of upward mobility and wellbeing, at both the individual and the societal level. She hopes to use her analytical skills to improve educational outcomes for children across the world by promoting effective evidence-based policies.

