So…Did It Work? — Measuring the Impact of Open Data Projects

October 30, 2017

What impacts are you trying to achieve? How can you measure and assess them? And how can we do all of this better?

Hello everyone! If you haven’t been following our blog, we recently organised the Open & Shut Conference, where we brought together a range of open data experts and practitioners to discuss the challenges and opportunities posed by open data in closed societies.

This third article in our series of posts collecting thoughts from the conference summarises a discussion about assessing the impact of open data.

The last few years have been a pretty exciting time for open data. Groups and organisations the world over are using open data in an attempt to solve public problems and empower ordinary citizens. However, how effective has open data really been, and how do we measure its impact?

To kick off the conversation, we looked at the Data4Change project.

Data4Change works with human rights organisations (HROs) in the MENA and West Africa regions. We bring groups together and teach them how to collect, store and visualise data, addressing a long-standing shortage of in-house storytelling capabilities. Over a period of a few days, researchers, coders, UX designers, graphic designers and data visualisation experts get together with HROs to create data visualisations and other innovative strategies aimed at elevating public engagement and effective advocacy, which in turn have the potential to bring about positive change.

When assessing the impact of these workshops, we have a clear goal: imparting skills to human rights defenders so they can use data to tell stories, whilst also producing an advocacy-focused campaign using those skills. If these goals are achieved, then we can claim positive change, and thus that the course had a good impact on the community, right? But should impact assessment really end there, or should it go further? Yes, there is a product that participants have created, but will these individuals and organisations continue using these skills, and will those skills continue to have a positive impact on their work?

Context Is Everything

This is where the group agreed that context becomes important. For example, a project focused on making data available on the HIV testing rate in location ‘x’ would be judged to have had impact if there was then a reduction in HIV rates in those areas. So in this case, whilst raising awareness is definitely considered an outcome, the overall impact of the project is the fundamental change it brings about.

In that sense, if the specific aim of a project was to raise awareness, then assessing impact can be more difficult. Some in the group suggested that the number of ‘clicks’ on an article, or its reading rate, could be a good indicator. But even this raised problems: while one project might count thousands of clicks as success, as few as 30 clicks could be considered a success for another. This is why it becomes important to establish the audience of a report; a report aimed at policymakers, or at a specific group within society, would not necessarily require a high click rate to be deemed a success.
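The point about audiences can be made concrete with a toy calculation. The sketch below is purely illustrative (the function name and the numbers are invented for this example, not taken from the discussion): it normalises clicks by the size of the intended audience, so that a small, well-targeted report can score higher than a mass campaign with many more raw clicks.

```python
def reach_ratio(clicks, target_audience_size):
    """Share of the intended audience reached (illustrative metric).

    Raw click counts mean little on their own: 30 clicks can be a
    success for a briefing aimed at 50 policymakers, while 5,000
    may be a disappointment for a mass-audience campaign.
    """
    if target_audience_size <= 0:
        raise ValueError("target audience size must be positive")
    # Cap at 1.0: reaching more people than targeted still means
    # the whole intended audience was covered.
    return min(clicks / target_audience_size, 1.0)

# A policymaker briefing read by 30 of its 50 intended readers
# outperforms a public campaign reaching 5,000 of 200,000 people.
print(reach_ratio(30, 50))        # 0.6
print(reach_ratio(5000, 200000))  # 0.025
```

This is only one possible framing, of course; the discussion made clear that no single metric suits every project.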

Quality vs. Quantity

This sparked conversation over the specific tools and methods organisations — particularly donors — use to assess projects.

One contributor in our group spoke positively about qualitative measures of impact, using participatory story gathering as an example. In this method, the project team discusses the changes they are noticing and records them as they happen, conveying change cumulatively so you see it unfold over time.

Another participant raised concerns that participatory story gathering could leave project managers conflating correlation with causation, and that reliance on qualitative measures could be vulnerable to confirmation bias. Others countered that a systematised, collaborative approach to story gathering can in fact offer valuable opportunities for rigorous critical reflection within a project team. On top of that, designing impact assessment to take account of emergent outcomes allows an implementer to respond to them in a more agile way, giving more room for movement and improvement in project management.

We all agreed, however, that quantitative methods are far from outdated or useless; rather, combined methodologies work best, providing us with both the bigger picture (quantitative) and the more nuanced, but just as important, human impacts (qualitative).

Unanticipated Impacts

So far we’ve been talking about the impact we want to see. What about unanticipated impacts? One participant mentioned that in Iran, there have been cases of unanticipated impact that you can only hear about through small individual stories. When a few journalists looked at what they thought would be a small story on childbirth and child marriage, the results of the project were huge! Yet there was no quantitative way to capture how big those results were: only someone who had been following the news for years could see the scale of the impact. Thus, as above, anecdotal evidence, whether from people in the know or from people on the ground who experience the change, is important.

But unanticipated impact isn’t always positive. This is where the group agreed that it is important to think about the possible negative impacts of your project, before you set it up. One participant noted that in Myanmar, for example, a project perceived as being ‘foreign’ could be seen as biased and/or fake. This in turn produces more difficulties, as domestic institutions do not necessarily have the capacity to produce impact-driven research and projects. So the same project, done in the same way, but by two different organisations, could have completely different impacts.

In addition to this, the group agreed that open data was oversold when it first came to the fore. With this came over-investment, and with over-investment came broad projects that lacked foresight. People started publishing troves of data for the public, but no one thought beyond publication, and the data wasn’t getting used. Added to that, over-selling can lead donors to push for things that are bigger than is actually possible. It is therefore important that donors do not push for unrealistic goals, and that project managers do not over-hype projects, as this can lead to a lack of tangible impact.

Thus, instead of expending time and energy on short-sighted efforts, it is important that we work over a longer period and invest in community engagement, making it easier to observe and measure impact. Without doing this, we run the risk of deploying weaker open data projects and further diminishing government and donor enthusiasm for its applications.

Make Open Data Impactful Again!

One participant asked if we could foresee something in the future that would allow open data to put itself back on the political agenda. Using the UK as an example, they pointed out that the country combatted its over-investment in open data from 2012 onwards by gradually implementing new systems to make existing open data resources more usable and accessible.

Essentially, we concluded that it’s about pushing governments and organisations to see the usefulness of open data again. For example, a participant pointed out that the US relies heavily on survey data (in some cases landline-only!) for its economic measurements, but has realised that open data practices could help streamline the process: the Internal Revenue Service (IRS) is now allowed to share data to fix the administration’s information flows. People in government there see data sharing as an opportunity to fix data architecture.

And then there’s the case of Innsyn.no, set up by two reporters covering local government. They made freedom of information (FOI) requests for the bulk of the data that went through their local government office, repeating this every week and compiling the results with added metadata. It later transpired that the government was using the tool as well, as it worked better than what they had been doing themselves!

So, what we learned from this session is that assessing the impact of data-led projects is no straightforward task. It takes forward planning, realistic goal-setting, context-based analysis, and people willing to advocate for data’s importance. No single impact assessment can work for everything, but it is important that we figure out what works, and where.