
Africa Evaluation Blog: Rethinking Success in M&E


Mokete Mokone


Measuring programme effectiveness should also create room to reflect on results that an intervention did not primarily intend. The expectation that evaluation reports will conclusively establish the outcomes or impacts of an initiative may not be realistic. Perhaps we need to rethink what ‘success’ should look like, and consider what alternative perspectives might guide us. In reality, both those who commission evaluations and those who respond to terms of reference do so anticipating that the evaluation will confirm the strength of a theory of change.

But what happens when an evaluation is not able to prove the effectiveness of a programme? Should we not have other indicators of success that can be measured to assess impact?

In this discussion I highlight some ideas for evaluation commissioners and practitioners to consider when negotiating the type of results to be investigated, while keeping the process part of learning and improving the sustainability of programmes.

We need to value and use monitoring

Monitoring should be used to track what is happening, provide a quick response mechanism, and make remediation possible during implementation. Instead of waiting until the end of programme implementation to make critical decisions, there is a feedback loop that should be fully utilised. Monitoring has become synonymous with data collection and reporting, but the analysis stage is frequently neglected. As a consequence of funding requirements, organisations merely collect data and complete various templates for funders without themselves understanding the likelihood of success or the early warning indicators.

So in the rush to be compliant, we often make the error of underestimating the value of monitoring data and missing material information about programmes. Monitoring data should actually be used as a first indicator of whether the programme is on track, allowing organisations to take remedial action or add a supplementary component to make the programme (more) effective.

This means that the same vigour and intelligence should be applied in both evaluation and monitoring concerning what type of programme data gets collected and analysed. Such measures also mean that programmes can negotiate early on with funders what results they will be able to realise and what success will look like, given how the programme unfolds over time, instead of waiting for a final evaluation to determine whether funding will be cut or whether the programme should continue.

Organisations should have regular monitoring reports that they use to make sense of their data. This would also address the challenge of commissioning evaluations at the end of a programme only to realise that key information is unavailable or insufficient. Regular feedback on trends, forecasts and progress should be part of monitoring, which would make for better implementation. For example, if a programme collects information about the success of candidates who completed their training, yet makes no effort to understand the intake of candidates, the issues raised during course attendance, and the reasons for drop-outs, it will not have a full picture of the merit of implementation processes or the likelihood of success.

In short, we should focus more on adaptive management.

We need to rethink ‘failure’

One of the most difficult aspects of being human is the inability to embrace failure; this is even more true for organisations. When we design programmes, we make assumptions about how they will work and what results they are likely to achieve. When these assumptions are not confirmed, we discard the entire process as a failed attempt, while seeking only to diagnose the most important design or implementation weaknesses.

In the process, we miss a valuable opportunity to learn and improve current and future programmes. The word ‘failure’ – especially for public and donor-funded initiatives – implies wasteful expenditure, with blame assigned to unprincipled officials. Instead, we can take a more holistic view and begin to take stock of the lessons that can be learned.

Rethinking failure would mean assessing the ‘learning experience’ in order to improve our own understanding, processes and systems. Even when a programme has not realised its intended results, there is value in assessing what can be learned. In fact, unintended positive results might actually mean success, even if they are different from what was intended. Because a programme is multi-dimensional, it is not merely about producing “this and that”. It can also be about the immediate outputs that have been realised, and how next steps can be taken to make the results (more) sustainable.

For example, consider an educational programme aimed at exposing young people to entrepreneurship and supporting them to start their own businesses. The programme may fail to produce those new businesses, yet still impart skills in financial management, personal development, branding and risk-taking. An alternative approach to evaluating this would be to look at how these skills might have changed, or have the potential to change, the lives of the young people – and to consider what should be done next to help them reach the next level of development.

Good lessons may also emerge when assessing, for example, any partnerships and networks that were established, or how the programme was able to deliver at output level – that is, it is not always necessary to focus on impacts.

We should consider unintended positive consequences

It is quite normal that some programmes will not realise the results they were funded to achieve. However, as mentioned, this is not necessarily a bad thing. There are other key indicators of success that can still be measured, even if they were not contracted with the funder, and these may offer valuable information about the impact of the programme beyond what was funded. This idea is discussed at length in Jonny Morrell’s book on uncertainty and unintended consequences, which advises evaluators to accommodate these surprises as part of their assessments.

I agree that it is important to link results with what has been funded. However, this does not mean it has to be done in isolation from other factors. There are spin-off effects of programmes that often do not make it into evaluations, either because they are too broad or because the link is not quite clear. There is still a case to be made for the unintended positive consequences of programmes, which are often not identified or given the credit they deserve.

This does not mean that if programmes do not meet their actual targets we should go on a wild chase for any positive ‘story’. On the contrary, it is about taking account of the contribution of the programme to outcomes or impacts that were not expected. This approach seeks to identify the role of the programme among other key activities, and to actually assess the level of impact even though some expectations may not have been fulfilled.

For example, a programme aimed at job creation among rural youth might enable the majority of participants to start their own businesses. A study of unintended positive consequences would take account of the strengths of the training, the newfound confidence and resourcefulness of candidates, and access to finance opportunities – in addition to, say, expected improvements in candidates’ ability to improve their own and their families’ material standards and to access better social services such as health, childcare or transport.

In summary

Evaluations should be used to assess the full impact of interventions. The challenge will always be to find the right balance between evaluating the direct results of the programme (which have been funded) and evaluating the broader systemic contribution of the programme (which has not been funded), and determining which tells a convincing story of success. Or perhaps it can be a combination of the two rather than a single approach. There should be an overall intention to understand how a programme contributes to better outcomes and makes a positive impact, whether direct or more universal.
