Updated: Jan 26
or: Why I am fantasising about standardisation and algorithms.
The NGO/non-profit world has been telling itself stories about its need for “digital transformation” or “digital maturity” for probably more than a decade now. Each year, the story is repeated with more urgency and sprinkled with a few cases of organisations that have “digitally transformed” their operations.
What is our role in that change, as monitoring, evaluation, and learning professionals? I would argue that despite the case studies featuring new mapping technologies, methods for field data collection, and the arrival of Tableau and Power BI, the M&E sector lags behind what would be possible and desirable in 2019. There is both a short-term/practical aspect to this argument and a long-term/existential one.
The short-term, practical part of my argument boils down to this: even though the tools available have changed drastically and some organisations have experimented with them, the core practices and outputs of monitoring and evaluation remain the same. This is both a cause and partly a consequence of the fact that programme management practices have not made the “digital transformation” either.
To assess the possibilities and progress better, let’s look at the different parts of M&E for social programmes:
a) Programme and M&E plan design: From my experience working with small charities, a grant funder, and now an international development agency in the UK, what is expected at design stage is a set of documents describing a theory of change, logframe, and related tools — all of them in narrative format, usually pdf-ed. That means these tools are used in the same way as when they were drawn up on paper — which obviously still makes them useful concepts, but they can only be read and understood by humans, not machines. Putting them on cloud-based storage systems in the exact same format can hardly be called genuine innovation or digitisation.
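To make the human-readable/machine-readable contrast concrete: the same logframe indicator that today lives in a pdf paragraph could be stored as structured data that software can aggregate and compare. A minimal sketch in Python — the class name, fields, and example values are all hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One logframe indicator, stored as data rather than narrative text."""
    name: str
    baseline: float
    target: float
    current: float

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered so far."""
        return (self.current - self.baseline) / (self.target - self.baseline)

# A made-up indicator for illustration
enrolment = Indicator(
    name="Girls enrolled in secondary school (%)",
    baseline=40.0,
    target=60.0,
    current=50.0,
)

print(f"{enrolment.name}: {enrolment.progress():.0%} of the way to target")
```

Once indicators are data rather than prose, comparing progress across a portfolio of programmes becomes a loop instead of a reading exercise.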
b) Data collection: the tools for data collection have improved significantly, with mobile-based and other automated surveys becoming more common. In some cases, there is also a different type of data being collected, such as geospatial data or video and audio data through mobile ethnography. In some cases international development has even dabbled in big data analysis. This stage has definitely seen some innovation.
c) Data analysis: tools such as Power BI and Tableau have made visual analysis much quicker and easier, offering a wider range of people a different type of sensemaking than R or SPSS do. This element could be highly impactful if data analysis skills, or at least an appreciation of their value, were more widely disseminated in development agencies.
d) Data/Evaluation use: better data visualisation tools make for prettier evaluation presentations, but beyond that, there is a large gap between the possible and the observed. As far as I can see, evaluations still primarily come as written reports (even when they are utilisation-focused evaluations). That remains the default despite the overwhelming evidence we all have that a report usually gets read once and then forgotten, mainly because the human brain is not wired to digest 50 pages of dense analysis and then apply it in contexts too complex to map neatly onto the evaluation’s content.
The end result of all four points above is this: while we have significantly more data (potentially) at hand, and much more advanced methods for analysing it, we still design and evaluate programmes as we did 20 years ago. It looks like the M&E sector did its digital transformation only at “activity” level, but then forgot to think it through to output and outcome level. Our “technical” M&E activities, i.e. data collection and analysis, have seen an uptake of digital tools that has likely made them better, but we start and end our work with conventional tools, and likely with conventional outcomes.
This means that we are in effect not walking our own talk. If we are the sector within international development in charge of facilitating learning, and always looking for a better way to do what we are doing, then we need to a) think bigger when designing M&E plans, and b) use the tools out there to make our evaluations understood and utilised.
Here we come to point 2: the longer-term, global and ethical implications of a digitally “immature” M&E sector.
Technology already offers so many ways to do things that humans used to do better. Why are we not using tech tools for good — at least not as strategically, not as quickly and not as effectively as some people use these same technologies for questionable purposes?
Why do we have full visibility into millions of people’s grocery shopping transactions, but we can’t track how much it costs to achieve certain outcomes?
Why is it now relatively easy to influence election results by buying Twitter bots, but it is still just as hard to make a judgement on the outcomes of democracy support?
Why is it so easy for marketing professionals to get us to behave in accordance with advertisements, but we can’t get professionals in development agencies to behave in accordance with evaluation results?
We are the sector tasked with ensuring society changes for the better, and does so in an evidence-based way. Why are we so far behind the guys who effectively make society change for dubious agendas?
What could a digitally mature M&E sector look like?
I genuinely do not know, but here are a few premature ideas:
a) Programme and M&E plan design: I imagine a world where my colleagues in programme management sit down to design a new programme, in front of a screen with a simple form attached to a massive database. As they start framing the problems they are trying to solve, and the objectives they are hoping to achieve, the system automatically shows them evidence to guide them: in which places problems similar to theirs have been solved, what results have looked like in similar contexts, what types of activities have led to results in their region. Anything the database has only sketchy evidence on, or has reason to doubt based on updates to the context, is flagged as an opportunity for innovation, with contact details of experts who operate or research in that area. It probably shouldn’t be trusted with decisions, but it could make sure the humans have all relevant information in a digestible format — a computer is much better at remembering and keeping track of layers of complexity than humans.
d) Evaluation use: I imagine a world where theories of change in similar sectors and geographies are combined across agencies, and evaluation results are continuously fed into a system that compares ToC logic to observations. I imagine a world where evaluation reports have been replaced with VR experiences, end-to-end learning experiences, and data stories as convincing as the lies of populists. I imagine a world where any member of the public could ask themselves “how effective is organisation X at supporting democracy and human rights protection?”, and would be only a Google search away from a verifiable, easily understandable answer, based on data collected across all organisations in the sector.
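The first part of that vision — continuously checking ToC logic against observations — could start very simply: count, across evaluations, how often each hypothesised activity-to-outcome link was actually observed. A sketch under that assumption, with all data and names invented:

```python
from collections import defaultdict

# Each tuple: (activity, outcome, link_observed) from one evaluation.
# In the imagined system these would stream in from many agencies.
OBSERVATIONS = [
    ("voter education", "higher turnout", True),
    ("voter education", "higher turnout", True),
    ("voter education", "higher turnout", False),
    ("judicial training", "faster case handling", False),
]

def link_support(observations):
    """Share of evaluations that confirmed each ToC link."""
    counts = defaultdict(lambda: [0, 0])  # link -> [confirmed, total]
    for activity, outcome, observed in observations:
        counts[(activity, outcome)][1] += 1
        if observed:
            counts[(activity, outcome)][0] += 1
    return {link: confirmed / total
            for link, (confirmed, total) in counts.items()}

for (activity, outcome), score in link_support(OBSERVATIONS).items():
    print(f"{activity} -> {outcome}: supported in {score:.0%} of evaluations")
```

Crude as it is, even this would be an advance on the status quo, where each report’s verdict on a ToC link is buried in prose and never aggregated with anyone else’s.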
This is why I’m fantasising about standardisation and algorithms. If we put enough effort into understanding how this could work, and enough investment into experiments with new ways of thinking about data, we can use computing power to make our results more powerful than ever. I’m fully aware of the limits of such undertakings: there is an important element of our work as M&E professionals that has to be transmitted as meaning through human-to-human methods, through stories, case studies, and explanations. But that should be the main thing we do. It’s time for the M&E sector to let computers do what they are better at, and to keep for ourselves only the work that truly requires M&E expertise.
So yes, the M&E sector is somewhat stuck in the middle of a “digital transformation”. It is essential that it makes digital the default for its work, so that it can attend to the bigger problems facing our sector.