Minds Turning Towards Evaluations & SPF

Introduction

It hardly seems five minutes since we were all working out what the Shared Prosperity Fund would mean for us, grappling with programme expenditure profiling, priority activities, and E numbers. Now, with no certainty about what comes next, the time has come to start thinking about economic evaluation.

Evaluation is often seen as the Cinderella of economic development. The money has been secured, and exciting projects have been announced and, for the most part, delivered. Evaluations can feel like re-reading your homework, but they are an essential part of future funding bids: those who can demonstrate delivery, value for money, and lessons learned are likely to be in prime position for future schemes. They are important, and when completed effectively, they form an essential evidence base.

Importance of Evaluation

The Magenta Book is the lesser-known sibling of the Green Book. It sets out the best-practice approach to evaluation: how to mark the homework that was set, following the Green Book model, when the funding bid was launched. If best practice is followed, the evaluation even revisits the rationale, objectives, and options appraisal created during the last bid. This revisiting is essential in framing the evaluation. It can be tempting to race towards outputs and outcomes, but the assessment needs to be considered against the thinking in place when the programme started. Were the rationale, the objectives, and even the choice of option valid in hindsight?

Of course, this is economic development, so there is a need to measure the effect of a programme against what might have happened anyway. This is where most of the literature on evaluation is focused: how can additionality over the counterfactual be captured, and how can value for money be measured against what was predicted at the outset?

There are several ways of establishing a counterfactual. In straightforward evaluations, these tend to boil down to three main approaches: surveying beneficiaries on how far the intervention drove the change in their subsequent behaviour; difference-in-differences, where results for a similar group that received no intervention are compared with those of the supported group; and synthetic control methods, where large population datasets are compared with an area of focused intervention. These are not the most sophisticated approaches. Randomised controlled trials and other experimental designs can be used, amongst others, but they generally need to have been established at the start of the programme.
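For readers who like to see the arithmetic, the difference-in-differences idea can be sketched in a few lines. This is a minimal illustration only: the function name and all the figures below are invented for the example, not drawn from any real programme data.

```python
# Toy difference-in-differences sketch (illustrative figures only).

def difference_in_differences(treated_before, treated_after,
                              comparison_before, comparison_after):
    """Estimate the programme effect as the change in the supported group
    minus the change in a similar, unsupported comparison group."""
    treated_change = treated_after - treated_before
    comparison_change = comparison_after - comparison_before
    return treated_change - comparison_change

# Hypothetical example: average turnover (in thousands of pounds) of
# supported firms rises from 200 to 260, while similar unsupported
# firms rise from 200 to 230.
effect = difference_in_differences(200, 260, 200, 230)
print(effect)  # 30
```

The estimate of 30 rests on the standard assumption that, without the intervention, the supported group would have followed the same trend as the comparison group; choosing a genuinely similar comparison group is where the real work lies.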

Summary

Evaluations tend to focus on the final numerical results, and of course the numbers are important, but all good evaluations consider three further aspects.

Firstly, an effective qualitative process of interviewing funders, stakeholders, and participants can be used to revisit whether the approach was the most appropriate and how well implementation took place.

Secondly, an imaginative use of graphics, a wider-ranging view of outputs, outcomes and benefits, and case studies that tell real-world stories of the work can bring an otherwise dry economic evaluation to life.

Finally, and most importantly, what has the work told us about the approach we might take next? All of the work on evaluation is for nought if we cannot turn the findings into exciting opportunities to deliver our work more effectively in the future.

Evaluations can, therefore, be embraced as offering new insights and interventions. In this way, they should be seized as the gateway to exciting new beginnings.

Mickledore is experienced in delivering many different types of evaluation. If you think an evaluation may benefit you, contact Mickledore at nwilcock@regionaldevelopment.co.uk to discuss it.
