The British approach to assessment is colonising the world—with mixed results, says Marta Wróblewska
The impact agenda introduced in the UK with the 2014 Research Excellence Framework did much more than revolutionise domestic research evaluation. Like many developments in British academia, it was keenly followed, and often emulated, by policymakers in countries around the world. Among these are Norway and Poland.
Between 2015 and 2018, Norway experimented with impact case studies modelled on the REF. It took a light-touch approach, not tying the exercise to funding as the UK did. Poland, in contrast, has gone all-in, linking impact to funding. Review panels are currently working on the country's national research evaluation, following the 15 January deadline for submissions. Among the many new and revamped elements is an impact assessment closely modelled on the REF.
Similarities with the British model include the use of case studies, the template for recording them, and even the language, as impact case studies have to be written in both Polish and English. But there are also important differences, some of which may undermine the usefulness of the exercise.
The first mention of impact in Poland appeared in a 2016 white paper on innovation published by the Ministry of Science and Higher Education, which made explicit reference to "following the example of Great Britain" in using evaluation to identify impact. The proposal was enacted in the 2018 Law on Higher Education.
Academic evaluation
Poland has a long tradition of research evaluation, with national assessments roughly every four years since 1991, and a century-long tradition of systematic reflection on the nature of scientific work. Academic evaluation is a strong research field and a perennially hot topic among academics. But perhaps because the new law covered several important areas, from doctoral education to career progression to evaluation, the impact element received little attention.
Ministry staff later expressed surprise that academics did not pick up on impact at all during consultations. As a result, impact evaluation was introduced without proper debate, almost by stealth.
The ministry commissioned a pilot evaluation, but the results were ambiguous and the final evaluations of the case studies were never made public. So while UK institutions were able to prepare for impact's inclusion in REF 2014 through many rounds of consultation, and in Norway the exercise itself was run as a trial, in Poland impact evaluation arrived abruptly.
Even as the evaluation deadline approached, it seemed that many faculty committees and deans were busy preparing for the traditional outputs component, realising only at the last moment that an impact case study would have a bigger effect on a unit's score than any individual publication. This is partly because the ministry's confusing decisions about the assessment of journal publications kept scholars focused on, and angry about, something else.
Impact accounts for 20 per cent of each unit's final score. In the social sciences and humanities (SSH), 'quality of research' (outputs) accounts for 70 per cent, and the funding brought in through research grants or commercial services accounts for the remaining 10 per cent.
While official documentation does not define impact clearly, various fragments point to a broad interpretation close to the British one. The Polish evaluation model, however, is very literal.
As in the UK, impact consists of reach and significance. In Poland, however, both have been clearly defined, with the documentation setting out the criteria for assigning scores, and each has been given a precise weighting, accounting for half of a case study's score. Reach is also defined geographically: only international reach can receive maximum points.
Finally, some differences between Poland and the UK are simply odd. For instance, Poland has a separate route for submitting additional case studies based on 'excellent monographs' or 'biographical dictionaries', and a case study's score can be boosted by 20 per cent for interdisciplinary work. At the last minute, the word limit for describing underpinning research in case studies was increased, suggesting an unexpected focus on research quality.
From an almost verbatim copy of the British version, the Polish concept of research impact has morphed into a whole other beast. The impact case studies have already been made available through a searchable database; along with a points score, evaluators will return descriptive feedback. But because of the sudden way in which impact evaluation has been introduced in Poland—without debate or adequate support—impact remains poorly understood. This is reflected in the uneven quality of case studies: having read a few dozen, I don’t envy the evaluators.
Marta Natalia Wróblewska is in the Institute of Humanities at the SWPS University of Social Sciences and Humanities, Warsaw
This article is the first in the Political Science 'New Voices' series, which aims to showcase early-career researchers and present a broader variety of views and perspectives on research culture.
This article also appeared in Research Europe