
Final reports

Final reports must have a very strong results focus. They should include lessons learned and a detailed explanation of the results delivered at the output level and of the intervention's contribution to the desired outcomes and impact. It is also important to capture ideas and recommendations for upcoming interventions in the same area, focusing on possible measures that can make the results delivered to date more sustainable. The information obtained through final reports and evaluations should also help operational managers plan for the continuation of an intervention (e.g. in the next annual action plan or Action Document). It is therefore useful to have implementing partners make lessons learned available close to the deadline for the next identification/formulation phase.

The design of a second phase of an intervention should be informed by what is observed and learned during the first phase. This cannot rely exclusively on the practical, individual experience of the staff concerned, as practice and experience are uneven across operational managers and often episodic. Operational managers can launch an evaluation that builds on information and analysis derived from ongoing monitoring, and thereby inform identification and formulation with a strong knowledge base. It is important to commission evaluations at the right moment: when implementing partners can provide their draft final report, and before the start of the next design (identification/formulation) phase.

Data collected through internal monitoring systems and evaluations should be used by operational managers to make evidence-based decisions on the design of the next phase of a given intervention (or another intervention in the same sector or country), if that is planned within the MIP/NIP/RIP and determined necessary through political and policy dialogue.

Institutional memory is vital, and newly posted operational managers should examine the lessons learned identified in past evaluation reports and in records of meetings with key informants. ROM experts also produce insightful thematic and country/regional fiches as part of their annual consolidated analyses. If external experts are contracted to support the identification and formulation process, operational managers should ensure that they review lessons learned before drafting a new programme. The former ROM and EVAL modules, now integrated in OPSYS, are among the practical tools that can facilitate this work, along with the thematic groups on Capacity4dev.eu.

Evaluations

Along with monitoring (internal and external) and its associated reporting, evaluations are the other key component of the overall INTPA organisational learning effort, as represented by the monitoring and evaluation pyramid. Here, evidence produced by continuous internal monitoring supports and directs regular external monitoring exercises, which in turn provide findings that are further analysed by means of ad hoc evaluations at the intervention or strategic level.

On average, the budget allocated to evaluations in INTPA represents 0.4 per cent of the intervention budget. According to 2018 data, this amounts to about 175 evaluations per year, mostly at the closure phase (final, 54 per cent; ex post, 7 per cent; mid-term, 35 per cent; ex ante, 1 per cent; strategic, 3 per cent).

The number of evaluations conducted yearly may seem high, but it amounts to fewer than two evaluations per year per service, well below the level recommended by the Organisation for Economic Co-operation and Development (OECD).

Other than for strategic evaluations, which are centralised, the Delegations or INTPA units in charge of the portfolio to be evaluated plan and manage their intervention-level evaluations. Since 2019, the planning exercise has been carried out in the EVAL module, later labelled the Operational Evaluation Plan. From 2021, the plan will be fully integrated in OPSYS.

In recent years, INTPA has made substantial efforts to enhance its evaluation systems, mainly through:

  • establishment of the Evaluation Support Service (2017) to support capacity building for intervention-level evaluations by providing on-demand, non-binding quality assurance;
  • development of the EVAL module, guaranteeing access to all intervention evaluations;
  • a more thorough evaluation planning process involving Delegations and INTPA units; and
  • a dedicated system of support to strategic evaluations (2019) through the Evaluation Support Service.

Several problematic aspects remain, mainly recurrent over-planning and under-spending on evaluations, coordination between intervention and strategic evaluations, the quality of intervention-level evaluations, the relevance of strategic evaluations in effectively addressing policy needs and priorities, and communication and dissemination.

During 2020, INTPA developed a new evaluation methodology aimed at increasing the regularity and quality of intervention-level evaluations and at strengthening the feedback loop from evaluation results into evidence-based policymaking, intervention design and implementation.

Evaluations at the closure stage must consider operational needs while prioritising the best possibilities for lesson learning, so that they effectively feed into the design of new interventions and policies. Two types of evaluation serve this purpose.

  • Final evaluations take place a few months before the operational closure of an intervention and should contribute to accountability by providing an assessment of the results achieved. They should also contribute to learning by providing an understanding of the factors that facilitated or hindered the achievement of results, focusing on why as well as what, and by identifying any lessons that would improve the quality of future interventions.
  • Ex post evaluations take place one to two years after the operational closure of a given intervention. They focus on the impacts (expected and unexpected) and sustainability of the intervention to draw conclusions that may inform new interventions. They describe the achievements towards which the intervention contributed, as well as how it did so or why it did not contribute as expected.

Evaluations assess interventions against the OECD Development Assistance Committee (DAC) evaluation criteria, i.e. relevance (often including the quality of design), efficiency, effectiveness, impact and sustainability (for definitions, see the Glossary of Key Terms in Evaluation and Results Based Management, 2002, and the Glossary of Key Terms in Evaluation and Results Based Management - Second Edition, 2022).

These are complemented by two INTPA-specific criteria: complementarity of assistance and EU added value. While some of the evaluation criteria examine the entire results chain and intervention logic, others focus on individual result levels (i.e. impact and effectiveness, the latter revolving around outcomes).

Evaluation criteria along the results chain