Monitoring and evaluation are part of the same system, commonly referred to as the M&E system. Although they serve different purposes, monitoring and evaluation are complementary assessments involving data collection, performance assessment, reporting and learning.

  • Monitoring focuses on what has happened. It is a continuous and organised process of systematic data collection throughout the life of an initiative to oversee its progress in achieving the expected results. It generates information that feeds into future evaluations and impact assessments and provides a solid evidence base for policymaking. Monitoring is primarily internal (based on data collected and analysed by the interventions) and occasionally external (based on data triangulation, including monitoring data from the interventions, and analyses conducted by external experts). The standard external monitoring system in place for DGs INTPA and NEAR is the results-oriented monitoring (ROM) review. In addition, the values of the corporate indicators (EURF, GERF, IPA and IPA III) that interventions collect for legal reporting are quality controlled centrally during the Results Reporting Exercises (RRE). Monitoring data from interventions’ systems thus inform individual ROM reviews, evaluations and the overall results of the EU’s external actions measured by corporate indicators (see the aggregation sketch after this list).
  • Evaluation identifies and explains not only what changes – intended or unintended – have occurred, but how and why they have occurred and what learning can be derived from that. Evaluation goes beyond an assessment of what has happened; it considers why something has occurred and, if possible, how much has changed as a consequence. It thus aims to draw conclusions about the causal effects of the EU intervention on the desired outcomes.
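
The roll-up from intervention-level monitoring to corporate results reporting can be pictured with a small sketch. The snippet below is purely illustrative: the intervention IDs, indicator names and figures are invented, and it does not represent the actual EURF/GERF data model or the RRE quality-control process.

```python
# Illustrative roll-up of indicator values reported by individual
# interventions into aggregate corporate totals. All IDs, indicator
# names and figures are invented for this sketch.
from collections import defaultdict

reported = [
    # (intervention id, corporate indicator, reported value)
    ("INT-001", "People with access to improved water sources", 12_000),
    ("INT-002", "People with access to improved water sources", 8_500),
    ("INT-003", "Teachers trained with EU support", 1_200),
]

totals: dict[str, int] = defaultdict(int)
for intervention_id, indicator, value in reported:
    totals[indicator] += value

for indicator, total in sorted(totals.items()):
    print(f"{indicator}: {total:,}")
```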

Key features and differences between monitoring, ROM reviews and evaluation:

 

|  | Monitoring | Results-oriented monitoring (ROM) | Evaluation |
| --- | --- | --- | --- |
| What | Daily management activity (piloting the operation) | Ad hoc review of an intervention’s performance, carried out according to a standard methodology | Analysis for in-depth assessment |
| Who | Internal management responsibility at all levels (EC and implementing partner) | Always incorporates external inputs/resources (objectivity) | Usually incorporates external inputs/resources (objectivity) |
| When | Ongoing | Periodic: on demand or if the intervention is facing problems | Ex ante, periodic (midterm, final), ex post |
| Why | Check progress, take remedial action, update plans | Check progress, take remedial action, provide input to follow-up actions | Learn broad lessons applicable to other interventions, policy review etc. |
| Focus | Inputs, activities, outputs, outcomes | Relevance, design and monitoring system, efficiency, effectiveness, sustainability, coordination, EU added value, cross-cutting issues, and communication & visibility | Rationale, relevance, outcomes, impact, sustainability, coherence, EU added value and other criteria as relevant |

The interventions’ internal monitoring systems are first shaped at the design stage, based on the context analysis, the needs and aspirations of the interventions’ beneficiaries and the results prioritised in the relevant programming documents (i.e. MIPs, NIPs and RIPs, where the overall and specific strategic objectives are already defined). By design, each intervention contributes to these broader strategic objectives, and its internal monitoring system should also measure this contribution using relevant indicators.

Lessons learned 

The initial response strategy should consider lessons learned from past and ongoing interventions, including those promoted by governments, other development partners and the EU.
Regarding EU-funded interventions, the main learning sources are: 

  • Reports from other interventions. 
  • Results-oriented monitoring (ROM). A ROM review provides an external, independent snapshot of the implementation of an intervention at a given moment. It serves as a support tool for intervention management by informing stakeholders about the performance of a specific intervention, and it also contributes lessons learned for further programming, design and future implementation of interventions.
  • Intervention evaluations. These analyse the results of a specific intervention, or of a group of logically interlinked interventions within a wider framework of collaboration in a country or region. They provide an in-depth understanding of an intervention's performance and lessons learned that can improve current and/or future interventions in the country/region/sector of operation and/or elsewhere. They are the responsibility of the EU delegations or DG INTPA operational units in charge and complement ROM reviews and internal monitoring.
  • Strategic evaluations. These assess the results of the combination of the EU’s external spending and non-spending actions, analysing EU strategies, policies, instruments or modalities over a significant period of time. They contribute to accountability by assessing the quality of INTPA development aid as a whole and provide recommendations and lessons for policy formulation and programming. They are managed by DG INTPA Unit D4 ‘Quality and Results, Evaluation, Knowledge Management’, which maintains a published work programme.

Ex ante evaluation 

During the design phase (identification and formulation), an especially relevant tool is the ex ante evaluation: an instrument that supports design and facilitates the later monitoring and evaluation of an intervention. Ex ante evaluations are often carried out as part of the identification and formulation studies. An ex ante evaluation is used to:

  • Test the likely effects of different scenarios/hypotheses;
  • Support intervention design and its results chain, ensuring quality/feasibility;  
  • Directly influence decisions upstream from implementation, transposing lessons from previous experiences; 
  • Prepare for future evaluations (establish clear indicators, targets and baselines). 

Ex ante evaluations are important for understanding different outcome scenarios: they benchmark the effect sizes that can be expected across a range of indicators, examine the cost-benefit or cost-effectiveness of the planned intervention, and estimate the effects of reforms before their implementation. An ex ante evaluation also verifies the need for the intervention and sets targets for its outcomes; this is done by verifying the intervention outline and its anticipated outcomes and by establishing outcome indicators.
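
To make the cost-effectiveness comparison concrete, the sketch below contrasts hypothetical design scenarios by cost per outcome unit. All scenario names and figures are invented for illustration; a real ex ante evaluation would rest on evidence-based estimates of costs and expected effects.

```python
# Hypothetical cost-effectiveness comparison across design scenarios:
# which option delivers an outcome unit (here, a household reached)
# at the lowest estimated cost? Figures are invented.
scenarios = {
    # scenario: (estimated cost in EUR, expected households reached)
    "A: direct grants":        (2_000_000, 8_000),
    "B: technical assistance": (1_500_000, 5_000),
    "C: blended approach":     (2_500_000, 11_000),
}

for name, (cost, outcome) in scenarios.items():
    print(f"{name}: EUR {cost / outcome:,.0f} per household reached")
```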

Monitoring at early design 

During the identification and formulation stages, the basis for the monitoring system starts to be laid down. The chain of results and its corresponding indicators should coherently translate the intervention’s logic. Baseline and final target values for each indicator should be specified on the basis of existing monitoring data. Finally, it is critical to consider the availability of the sources of information needed to track each indicator. The Commission has put in place a dedicated service to support the design of logframes (the SDL service) at the contracting and implementation stages. An SDL can be requested by the Operational Managers of DG INTPA, DG NEAR and FPI. A minimal sketch of how such an indicator might be tracked follows.
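
As an illustration of the pieces such a monitoring system needs to hold together (indicator, result level, baseline, target, source of verification and periodic measurements), here is a minimal sketch in Python. The class and field names are hypothetical assumptions for this illustration, not the structure of any Commission system or logframe template.

```python
# A minimal, hypothetical record for one logframe indicator.
# Field names are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str                 # e.g. "Number of teachers trained"
    result_level: str         # "output", "outcome" or "impact"
    baseline: float           # value at the start of the intervention
    target: float             # final target value
    source: str               # source of verification for the data
    measurements: list[tuple[str, float]] = field(default_factory=list)

    def record(self, date: str, value: float) -> None:
        """Add a monitoring data point (date, observed value)."""
        self.measurements.append((date, value))

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered so far."""
        if not self.measurements:
            return 0.0
        latest = self.measurements[-1][1]
        span = self.target - self.baseline
        return (latest - self.baseline) / span if span else 0.0

# Usage: baseline and target are set at design; values are added
# as monitoring data come in during implementation.
teachers = Indicator("Teachers trained", "output",
                     baseline=0, target=400, source="Training records")
teachers.record("2024-06", 150)
print(f"{teachers.progress():.0%} of target reached")  # 38% of target reached
```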

Methodological fiche(s):

  • Guidance on internal monitoring for results
  • Evaluation methodology