The purpose of monitoring and evaluation can include generating useful knowledge that supports learning to improve effectiveness and/or supporting accountability for the use of resources. Different types of evaluations can be conducted for different reasons, to answer different questions, and at different stages. We focus on the following specific intervention stages and M&E services:

  1. Planning and designing an intervention to increase its chances of success: design and diagnostic evaluations (including needs assessments) can review the evidence base and intervention context to inform intervention design. This can include developing an intervention theory and logical framework to clarify the linkages between resources, activities and deliverables (services and products), change processes and mechanisms, and desired benefits (outcomes) and longer-term impacts. In addition, different types of synthesis evaluation, such as systematic reviews, can be conducted to systematically identify what is already known in similar contexts about what works to address a specific problem and/or achieve a specific outcome.
  1. Strengthening intervention effectiveness through feedback and continuous improvement: formative evaluations are conducted while an intervention is being implemented to gather information that can be used to improve or strengthen its implementation. Process or implementation evaluations seek data with which to understand what is actually going on in an intervention (what the intervention actually is and does) and whether intended service recipients are receiving the services they need. They examine the processes involved in delivering the intervention and are intended to help intervention implementers, designers, and managers address challenges to the intervention’s effectiveness.
  1. Strategic planning and accountability: summative evaluations are conducted near, or at, the end of an intervention to show whether or not it has achieved its intended outcomes. This comprises outcome or impact evaluations, which can use experimental or quasi-experimental methods, or, where no comparison group not receiving the intervention exists, theory-based approaches such as realist evaluation and contribution tracing, as well as cost-effectiveness or value-for-money analysis. These gather and analyze data to show the ultimate, often broader and longer-lasting, effects of an intervention. Summative evaluations seek to determine whether the intervention should be continued, replicated, scaled down, or stopped.
  1. Strengthening organisational systems for monitoring, learning and improvement: monitoring and evaluation systems development. Organisations need to design and implement systems that support effective data collection, reporting and utilisation as part of their decision-making, in order to ensure that effective learning and continuous improvement in support of the organisation’s objectives take place. This can include performance monitoring and reporting systems as well as monitoring and evaluation policies and results frameworks.

To show whether interventions are making a difference, Impact Economix designs cost-effective, tailored data measurement and collection strategies. We are careful to identify data that is specifically indicative of the changes an intervention seeks to make, and we also collect data that may reveal unintended consequences of its implementation. We use the right combination of quantitative and qualitative data collection methods, often together, to answer evaluation questions:

  • Developing theories of change to identify what data needs to be collected. A Theory of Change answers key intervention logic questions such as “why do you do what you do?” and “how will it make a difference?” It guides strategic decisions and actions and is also an effective communication tool to promote a shared understanding of an intervention among diverse stakeholders. It is a central building block that lays the foundation for planning, measuring, monitoring, evaluating, learning, and improving.
  • Surveys (either internet-based, telephonic or face-to-face, depending on the context).
  • Baseline studies to capture the level of development at the beginning of an intervention.
  • Pre- and post-tests to track what changes take place among intervention participants.
  • Focus groups to dig deeper into complex issues and explore various perspectives.
  • Key informant interviews to obtain expert inputs and discuss sensitive issues.
  • Analysis of documents containing intervention data.
  • Literature reviews of key theories and/or case studies for benchmarking and learning.
  • Observations of program operation in the field or implementation context.