Monitoring and evaluation are part of the same system, commonly referred to as the M&E system. Although they serve different purposes, monitoring and evaluation are complementary exercises involving data collection, performance assessment, reporting and learning.
Key features of, and differences between, monitoring, results-oriented monitoring (ROM) and evaluation:
| | Monitoring | Results-Oriented Monitoring (ROM) | Evaluation |
| --- | --- | --- | --- |
| What | Daily management activity (piloting the operation) | Ad hoc review of the intervention's performance, carried out according to a standard methodology | Analysis for in-depth assessment |
| Who | Internal management responsibility at all levels (EC and implementing partner) | Always incorporates external inputs/resources (objectivity) | Usually incorporates external inputs/resources (objectivity) |
| When | Ongoing | Periodic: on demand or when the intervention is facing problems | Ex ante, periodic (midterm, final), ex post |
| Why | Check progress, take remedial action, update plans | Check progress, take remedial action, provide input to follow-up actions | Learn broad lessons applicable to other interventions, policy review, etc. |
| Focus | Inputs, activities, outputs, outcomes | Relevance, design and monitoring system, efficiency, effectiveness, sustainability, coordination, EU added value, cross-cutting issues, and communication & visibility | Rationale, relevance, outcomes, impact, sustainability, coherence, EU added value and other criteria as relevant |
The internal monitoring system of an intervention is first shaped at the design stage, based on the context analysis, the needs and aspirations of the intervention's beneficiaries, and the results prioritised in the relevant programming documents (i.e. MIPs, NIPs and RIPs, where the strategic overall and specific objectives are already defined). By design, each intervention contributes to these broader strategic objectives, and its internal monitoring system should also measure this contribution using relevant indicators.
Lessons learned
The initial response strategy should consider lessons learned from past and ongoing interventions, including those promoted by governments, other development partners and the EU.
For EU-funded interventions, the main sources of learning are:
Ex ante evaluation
During the design phase (identification and formulation), an especially relevant tool can be the ex ante evaluation: an instrument that supports design and facilitates the later monitoring and evaluation of an intervention. Ex ante evaluations are often carried out as part of the identification and formulation studies.
Ex ante evaluations are important for understanding different outcome scenarios and benchmarking the effect sizes that can be expected across a range of indicators, for examining the cost-benefit or cost-effectiveness of the planned intervention, and for estimating the effects of reforms before their implementation. They are also used to verify the need for the intervention and to set targets for its outcomes; this is done by verifying the intervention outline and its anticipated outcomes and by establishing outcome indicators.
Monitoring at early design
During the identification and formulation stages, the basis for the monitoring system is laid down. The chain of results and its corresponding indicators should coherently reflect the intervention's logic. Baseline and final target values for each indicator should be specified based on existing monitoring data. Finally, it is critical to consider the availability of the sources of information needed to track each indicator. The Commission has put in place a dedicated service to support the design of logframes (SDL service) at the contracting and implementation stages. SDL support can be requested by the Operational Managers of DG INTPA, DG NEAR and FPI.
Training (EU restricted): Managing an Evaluation Process
Methodological fiche(s): Guidance on internal monitoring for results