
How do monitoring and evaluation influence future interventions?

INTPA is a learning-based organisation. As such, it aims to systematically translate accumulated knowledge and expertise into operational reality. This is a continuous exercise in applied knowledge, with the ultimate objective of better linking INTPA processes with the delivery of specific products and the achievement of intended results.

Learning (along with accountability, management and communication) is one of the purposes of monitoring, auditing and evaluation. For optimal results, learning from evaluations should be combined with learning from monitoring (internal, external/ROM, results collection) and from audits (intervention audits as well as the more strategic audits by the Internal Audit Service and the Court of Auditors). Monitoring, evaluation and auditing are intended to provide an in-depth understanding of the performance of an intervention, or of strategic issues, against initial expectations, thus forming a solid base for deriving lessons for future interventions or programming decisions.


The EU is committed to the ‘evaluation first’ principle, which ensures that any policy decision takes account of lessons learned from past EU actions. INTPA staff rely to a great extent on lessons learned and findings from past experience to prepare new interventions and inform future decision-making: operational managers and decision-makers are interested in clear and operational recommendations credibly grounded in evidence. From this perspective, monitoring and evaluation (and, to a certain extent, auditing) are the institutionally established mechanisms for learning from experience, better understanding what works and what does not, and accumulating knowledge for application in future actions.

  • Monitoring, ex ante and mid-term evaluations, as well as audits, have an immediate impact on a given intervention, although the lessons learned will also contribute to and shape future interventions, provided that knowledge is translated into operational reality. In turn, the conclusions and recommendations of final and ex post evaluations feed directly into new actions in all phases of the intervention cycle, while the findings of strategic evaluations inform aid programming processes.
  • Evaluation goes beyond assessing what has happened: it also considers why it happened (the role of the EU intervention) and how much EU actions contributed to positive change. Evaluations are one of the key components of the overall INTPA organisational learning effort, as represented by the monitoring and evaluation pyramid. Here, evidence produced by continuous internal monitoring supports and directs regular external monitoring exercises, which in turn provide findings that are further analysed by means of ad hoc evaluations at the intervention or strategic level.









Where to find lessons learned?

Final reports

Final reports must have a very strong results focus. They should include lessons learned and a detailed explanation of the results delivered at the output level and of the intervention's contribution to the desired outcomes and impact. It is also important to capture ideas and recommendations for upcoming interventions in the same area, focusing on possible measures that can make the results delivered to date more sustainable. The information obtained through final reports and evaluations should also help operational managers plan for the continuation of an intervention (e.g. in the next annual action plan or Action Document). It is useful to ask implementing partners to make lessons learned available close to the deadline for the next identification/formulation phase.

The design of a second phase of an intervention should be informed by what is observed and learned during the first phase. This cannot rest solely on the practical, individual experience of the staff concerned, as practice and experience are uneven across operational managers and often episodic. Operational managers can launch an evaluation that builds on the information and analysis derived from ongoing monitoring, thereby informing identification and formulation with a strong knowledge base. It is important to commission evaluations at the right moment: when implementing partners can provide their draft final report, and before the start of the next design (identification/formulation) phase.

Data collected through internal monitoring systems and evaluations should be used by operational managers to make evidence-based decisions on the design of the next phase of a given intervention (or another intervention in the same sector or country), if that is planned within the MIP/NIP/RIP and determined necessary through political and policy dialogue.

Institutional memory is vital, and newly posted operational managers should examine lessons learned identified in past evaluation reports and records of meetings with key informants.

ROM experts also produce insightful thematic and country/regional fiches as part of their annual consolidated analyses. If external experts are contracted to support the identification and formulation process, operational managers should ensure that they review lessons learned before drafting a new programme. The former ROM and EVAL modules, now integrated in OPSYS, are some of the practical tools that can facilitate this work, along with the thematic groups at Capacity4dev.eu.

Evaluations

Along with monitoring (internal and external) and the associated reporting, evaluations are the other key component of the overall INTPA organisational learning effort, as represented by the monitoring and evaluation pyramid described above.

On average, the budget allocated to evaluations in INTPA represents 0.4 per cent of the intervention budget. According to 2018 data, this amounts to about 175 evaluations per year, mostly at the closure phase (final: 54 per cent; ex post: 7 per cent; mid-term: 35 per cent; ex ante: 1 per cent; strategic: 3 per cent).

The number of evaluations conducted yearly may seem high, but it amounts to fewer than two evaluations per year per service, a much lower level than that recommended by the Organisation for Economic Co-operation and Development (OECD).

Other than strategic evaluations, which are centralised, intervention-level evaluations are planned and managed by the Delegations or INTPA units in charge of the portfolio to be evaluated. Since 2019, this planning exercise has been done in the EVAL module, later labelled the Operational Evaluation Plan. From 2021, the plan will be fully integrated in OPSYS.

In recent years, INTPA has made substantial efforts to enhance its evaluation systems, mainly through the establishment (2017) of the Evaluation Support Service, which builds capacity for intervention-level evaluations by providing on-demand, non-binding quality assurance; the development of the EVAL module, guaranteeing access to all intervention evaluations; a more thorough evaluation planning process involving Delegations and INTPA units; and a dedicated system of support to strategic evaluations (2019) through the Evaluation Support Service. Several problematic aspects remain, mainly recurrent over-planning of and under-spending on evaluations, coordination between intervention and strategic evaluations, the quality of intervention-level evaluations, the relevance of strategic evaluations in effectively addressing policy needs and priorities, and communication and dissemination.

During 2020, INTPA developed a new methodology for evaluations, aimed at increasing the regularity and quality of intervention-level evaluations and at strengthening the feedback loop of evaluation results into evidence-based policymaking, intervention design and implementation.

Evaluations at the closure stage must consider operational needs while prioritising the best possibilities for lesson learning, so that they can effectively feed into the design of new interventions and policies. Two types of evaluation serve this purpose.

  • Final evaluations take place a few months before the operational closure of an intervention and should contribute to accountability by providing an assessment of the results achieved. They should also contribute to learning by providing an understanding of the factors that facilitated or hindered the achievement of results, focusing on why as well as what, and by identifying any lessons that would improve the quality of future interventions.
  • Ex post evaluations take place one to two years after the operational closure of a given intervention. They focus on the impacts (expected and unexpected) and sustainability of the intervention to draw conclusions that may inform new interventions. They describe the achievements towards which the intervention contributed, as well as how it did so or why it did not contribute as expected.

Evaluations assess interventions against the OECD Development Assistance Committee (DAC) evaluation criteria, i.e. relevance (often including the quality of design), efficiency, effectiveness, impact and sustainability (for definitions, see the Glossary of Key Terms in Evaluation and Results Based Management, 2002, and its second edition, 2022).

These are complemented by two INTPA-specific criteria: complementarity of assistance and EU added value. While some of the evaluation criteria examine the entire results chain and intervention logic, others focus on individual result levels (e.g. impact, and effectiveness, which revolves around the outcomes).

INTPA is currently developing a project for internal staff to (a) provide support for on-demand knowledge from monitoring and evaluation and (b) collect monitoring and evaluation knowledge to inform the design and programming phases.

INTPA has put in place an evaluation helpdesk for internal staff, which can be reached via the following functional mailbox: INTPA-EVALUATION-SUPPORT@ec.europa.eu.

Evidence Support

Other step:

Evaluation

Methodological fiche(s):

Evaluation Methodology

INTPA intranet page on Evaluation

...