19 January 2015

How much do I actually spend?

As evaluators we are continuously asked how much should be spent on monitoring and evaluation (M&E). Although 3-5% of program budget is frequently suggested as a benchmark, actual spend on M&E should always be linked to purpose: it is important to understand what information is needed to influence the right target audiences at the right times.
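To see why a flat percentage can be a blunt instrument, the short sketch below compares the conventional 3-5% benchmark against a bottom-up, purpose-driven estimate built from individual M&E activities. All figures and activity names are hypothetical, chosen only to show the arithmetic.

```python
# Minimal sketch: flat-percentage benchmark vs. a bottom-up, purpose-driven
# M&E budget. All figures are illustrative assumptions, not recommendations.

PROGRAM_BUDGET = 2_000_000  # hypothetical total program budget (USD)

# Conventional benchmark: 3-5% of program budget.
benchmark_low = PROGRAM_BUDGET * 0.03
benchmark_high = PROGRAM_BUDGET * 0.05

# Purpose-driven alternative: cost each M&E activity the design requires.
planned_activities = {
    "baseline survey": 45_000,
    "routine monitoring (collection and storage)": 30_000,
    "midline review": 20_000,
    "endline evaluation": 60_000,
    "analysis, learning and dissemination": 15_000,
}
bottom_up_total = sum(planned_activities.values())

print(f"3-5% benchmark: ${benchmark_low:,.0f} - ${benchmark_high:,.0f}")
print(f"Bottom-up estimate: ${bottom_up_total:,.0f} "
      f"({bottom_up_total / PROGRAM_BUDGET:.1%} of program budget)")
```

In this hypothetical case the purpose-driven estimate (8.5%) sits well above the 5% ceiling, exactly the kind of mismatch that only surfaces when budgeting is done at the design stage.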

The variable capacity of organisations to plan and budget for each element of M&E results in systems that are often not fit for purpose or are woefully under-resourced. Instead of ensuring there is sufficient capacity to collect, store, analyse and share learning, organisations tend to analyse M&E data on an ad hoc basis and treat it mostly as an accountability mechanism.

Beyond donor accountability, M&E should be seen as an opportunity for organisations and their partners to improve program delivery, win over reluctant decision makers and potentially leverage additional interest and investment for scaling up or replicating activities.

Retrospectively adapting program budgets to M&E needs doesn't work, so it is essential that M&E planning and budgeting are completed during a program's design phase.

It is accepted that programs will evolve between the design and inception phases. However, simply allocating 3-5% of the program budget and setting up M&E systems retrospectively is not effective and divorces M&E from learning.

Our Solution

Currently, many organisations are not able to identify and resolve gaps in their M&E capacity across the different elements of monitoring, evaluation and learning. We work with clients to first define the purpose of their M&E by identifying who will be using the information and what level of evidence is needed to persuade them.

Evaluation questions should directly respond to the purpose of the evaluation. Many organisations tend to spread their M&E budgets too thinly by trying to answer a multitude of questions without considering relevance or resourcing. It is best practice to answer fewer questions in more detail.

Once the evaluation questions are selected, it is then important to choose the most appropriate research methods. There is no gold standard: all research methods involve trade-offs and cost implications. Instead, research methods should be judged by first weighing whether the quality of evidence they produce will convince target audiences.

Finally, clients need to identify gaps in their own capacity, and that of their partners, to deliver the different M&E components. Specifically, organisations should make sure they have considered the different aspects of collecting, validating, storing, analysing, applying and sharing information to ensure that evaluation questions can be answered.
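A lightweight way to make this capacity review concrete is to score each stage of the data lifecycle and flag the weak links. The sketch below is purely illustrative: the stages come from the paragraph above, while the scores and threshold are hypothetical placeholders for a real self-assessment.

```python
# Minimal sketch: flagging capacity gaps across the M&E data lifecycle.
# Stage list follows the text above; scores (0-5) and the threshold are
# hypothetical placeholders for a real self-assessment exercise.

STAGES = ["collect", "validate", "store", "analyse", "apply", "share"]

self_assessment = {  # hypothetical scores for one organisation
    "collect": 4, "validate": 2, "store": 3,
    "analyse": 2, "apply": 1, "share": 3,
}

THRESHOLD = 3  # below this, treat the stage as a gap to budget for

gaps = [stage for stage in STAGES if self_assessment[stage] < THRESHOLD]
print("Capacity gaps to address before finalising the budget:", gaps)
# -> ['validate', 'analyse', 'apply']
```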

The unifying thread throughout the entire M&E review process is Coffey's examination of program budgets. Where there are important gaps in M&E capacity, Coffey advisors provide tailored support to address them: ensuring that survey sampling points are appropriately distributed, that data collection methods and staff are suitable, and that there are systems for validating and safeguarding often-sensitive beneficiary information.
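One of those checks, the distribution of survey sampling points, can be illustrated with a short sketch. The example below allocates a fixed survey sample across program districts in proportion to their beneficiary populations (proportional stratified allocation); the district names and figures are hypothetical.

```python
# Minimal sketch: allocating a fixed survey sample across strata in
# proportion to population size (proportional stratified allocation).
# District names and population figures are hypothetical.

def allocate_sample(populations: dict[str, int], total_sample: int) -> dict[str, int]:
    """Distribute total_sample across strata proportionally to population."""
    total_pop = sum(populations.values())
    # Initial proportional allocation, rounded down.
    allocation = {k: total_sample * v // total_pop for k, v in populations.items()}
    # Hand out any rounding remainder to the largest strata first.
    shortfall = total_sample - sum(allocation.values())
    for name in sorted(populations, key=populations.get, reverse=True)[:shortfall]:
        allocation[name] += 1
    return allocation

districts = {"North": 12_000, "Central": 30_000, "South": 8_000}
print(allocate_sample(districts, total_sample=400))
# -> {'North': 96, 'Central': 240, 'South': 64}
```

Real allocations also need to reflect the precision required for each subgroup, but even this simple proportional rule makes sampling costs visible early enough to budget for them.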

By thinking through the entire monitoring, evaluation and learning process from the design stage, organisations will be able to develop budgets that are tailored to their M&E objectives and are realistic in light of existing capacity.

How we can help

We encourage organisations that have questions about how to systematically plan and budget for M&E to speak to one of Coffey's evaluation and research consultants. We are more than happy to discuss any questions and to signpost organisations to useful publicly available resources.

For more information, please contact one of our evaluation consultants, Peter Mayers, at Peter.Mayers@coffey.com.
