Official measures of research ‘impact’ are failing to keep pace with socially networked academics
A study of how academics use social media to encourage people to interact with their research argues that much of the public value of their work is probably being overlooked in official ‘impact’ assessments.
The study analysed how more than 100 academics discuss and encourage the uptake of their scholarship on social media. Based on the usage patterns it uncovered, it suggests the approach to assessing universities’ public ‘impact’ enshrined in the Research Excellence Framework (REF) may be due an update, since academics are now more socially networked than they were when the model was devised.
The REF is the official system for measuring the quality of university research in the UK and informs the distribution of research funding. Following the completion of the latest assessment round this month, the next REF cycle is due to finish in 2027.
Among other requirements, the REF asks university departments to demonstrate the ‘impact’ of their work on society. While the new study is supportive of the principle of requiring ‘impact case studies’ – which aim to capture the ways in which research can enrich society – it questions the current approach to assessment. It argues that a gulf is opening up between how impact is measured in the REF and the true scope and range of scholarly engagement on social media platforms – some of which did not even exist when the framework was first devised.
In particular, the present model focuses on the extent to which the final outputs of completed research projects are received and absorbed by public audiences. By contrast, the study found that today’s academics are often engaged in ongoing ‘feedback loops’ with organisations, community groups, policy actors and others during a project’s lifetime. These loops not only generate outputs at the end of a project; they also create opportunities to collaborate and share expertise while the research is still underway, often in ways that the REF is unlikely to cover.
The study’s author, Dr Katy Jordan, from the Faculty of Education at the University of Cambridge, said: “The official language presents impact as a top-down, outward flow from universities to a waiting public, but this is an outdated characterisation – if it was ever valid at all. Ask researchers about their most impactful interactions on social media, and you’ll get a much wider range of examples than the REF covers.”
“You could argue that this means too many researchers are misunderstanding the process; but it’s also potentially evidence that times have changed. There’s a huge amount to be said for asking universities to demonstrate their value to wider society, but it may be time to rethink how we measure this, given what social media has made possible.”
The REF measures impact through two principal dimensions: ‘significance’ (the meaningful difference a project makes) and ‘reach’ (the quantifiable extent to which it does so). The definition of impact beyond this is very open-ended, is often considered ambiguous, and varies widely across academic disciplines.
The REF framework presents a somewhat contradictory picture of public engagement, excluding it from the definition of impact while defining impact in terms of the public’s interaction with research. Official guidance states: “Engaging the public with research does not count as impact. Impact is what happens when people interact with the research, take it up, react, or respond to it. Public engagement doesn’t just happen when the research is complete.”
Jordan’s survey invited academics to provide examples of strong impact they had achieved through social media. She received responses from 107 scholars in 15 different countries; most were UK-based, and participants ranged from postgraduate researchers to established professors. Her research analysed 209 of the examples they submitted.
Significantly, fewer than half related to cases in which research had been disseminated ‘outwards’ to the public as finished products, in the way the REF presumes. In such cases, the academics had typically used social platforms to share their findings with a bigger audience, to stimulate discussions with colleagues, or to generate evidence of positive engagement with the research for potential use in REF case studies.
About 56% of the responses, however, described impacts arising from incoming exchanges on social platforms: in particular, after the researchers had used social media to test out ideas, report interim findings, crowdsource information and data, or recruit research participants.
These discussions appear to have generated not just ‘impact’ but a wider range of forms of ‘public engagement’. As a result of the exchanges, researchers were invited to give public lectures, participate in panel discussions, give evidence and advice to organisations, or run training sessions.
Crucially, these opportunities did not always focus on the research that had stimulated the interaction on social media. Many researchers were subsequently asked to share their broader expertise – for example, with advocacy organisations or policy-makers interested in finding out more about their research field in general. In one case, a post on social media led to a senior civil servant from the Cabinet Office visiting the researcher and several of his academic colleagues to explore how their work as a whole might inform and shape policy.
Jordan argues that social media is blurring the distinction between ‘impact’ and ‘public engagement’. As information flows into academic projects – from people, companies and organisations contributing ideas, questions and feedback through social platforms – these exchanges generate both formal and informal opportunities for ‘outward’ exchange. This circuit of interaction may well be influencing and benefiting society in multiple ways not tracked by the REF.
Part of the problem facing assessors is that these more nuanced impacts are difficult to monitor or measure. “One solution may be to amend the assessment so that it asks universities not just to provide evidence of research outcomes, but to explain the research process across a project’s lifetime,” Jordan said. “This isn’t a call for yet more ambiguity about what impact is, but for more open-mindedness about what researchers achieve. In an increasingly complex, socially networked culture, this would help to ensure that the broader effects of their work are not forgotten.”
The research is published in Learning, Media and Technology.