The purpose of monitoring and evaluation (M&E) can include generating useful knowledge that supports learning to improve effectiveness and/or supporting accountability for the use of resources. Different types of evaluations can be conducted for different reasons, to answer different questions, and at different stages. We focus on the following intervention stages and M&E services:
- Planning and designing an intervention to increase its chances of success: design and diagnostic evaluations (including needs assessments) can review the evidence base and intervention context to inform intervention design. This can include developing an intervention theory and logical framework to clarify the linkages between resources, activities and deliverables (services and products), change processes and mechanisms, and desired benefits (outcomes) and longer-term impacts. In addition, different types of synthesis evaluation, such as systematic reviews, can be conducted to systematically identify what is already known in similar contexts about what works to address a specific problem and/or achieve a specific outcome.
- Strengthening intervention effectiveness, feedback, and continuous improvement: formative evaluations are conducted while an intervention is being implemented to gather information that can be used to improve or strengthen its implementation, typically through process or implementation evaluations. These seek data with which to understand what is actually going on in an intervention (what the intervention actually is and does), and whether intended service recipients are receiving the services they need. Process evaluations examine the processes involved in delivering the intervention and are intended to help intervention implementers, designers, and managers address challenges to the intervention's effectiveness.
- Strategic planning and accountability: summative evaluations are conducted near, or at, the end of an intervention to show whether or not it has achieved its intended outcomes. This comprises outcome or impact evaluations, which can use experimental or quasi-experimental methods, or theory-based approaches such as realist evaluation and contribution tracing where no comparison group not receiving the intervention exists, as well as cost-effectiveness or value-for-money analysis. These gather and analyze data to show the ultimate, often broader and longer-lasting, effects of an intervention. Summative evaluations seek to determine whether the intervention should be continued, replicated, or wound down and stopped.
- Strengthening organisational systems for monitoring, learning, and improvement: monitoring and evaluation systems development. Organisations need to design and implement systems that support effective data collection, reporting, and utilization as part of the organisation's decision-making, in order to ensure that effective learning and continuous improvement in support of the organisation's objectives take place. This can include performance monitoring and reporting systems as well as monitoring and evaluation policies and results frameworks.