Handled carefully, large language models can transform research support, say Maéva Vignes and Lionel Jouvet
When you work in research management, you never know which of your skills and experiences are going to come in handy. As researchers, we both had a close-up view of the avalanche of data coming out of the sciences and quickly realised we needed to be data scientists as well as disciplinary experts.
One of us, Maéva, has a background in nanotechnology and in the design of educational board games. After joining the research support team at the University of Southern Denmark’s Research Innovation Organisation (SDU RIO) in 2017, she incorporated data science into her work. In 2020 she led the development of web apps using natural language processing to help researchers search for funding opportunities.
For his part, Lionel began his career as a marine biologist and helped to set up a makerspace in Odense, teaching everything from sewing and woodwork to computer-aided design and 3D printing. In late 2021 that role led him, via a fellow maker, to the GPT sandbox, an early chatbot powered by a large language model with impressive capacities in writing and describing computer code.
Lionel joined the data group at SDU RIO in mid-2022. Together, we quickly recognised both the great promise and the potential risks of generative AI for research support and coding. We were determined to stay ahead of the curve.
Magical but fallible
On 28 November 2022, OpenAI released the public version of ChatGPT. As people experimented with the bot over the Christmas holidays, two main impressions emerged: one, it was a remarkable tool capable of writing on any subject; and two, it often gave incorrect information.
This combination of magic and fallibility sparked a flurry of conversations among colleagues about the capacities, limitations and potential of LLMs and chatbots. Over the next few months, interest in generative AI for producing images also surged. Colleagues began achieving impressive results, and the buzz around LLMs grew louder within the academic community.
In June 2023, the growing curiosity and concerns spurred us to host a workshop, bringing together departments including administration, research support, legal and ethics. This became a weekly learning circle for testing tools such as Microsoft Copilot, OpenAI's GPT and DALL·E, swapping ideas, sharing reservations, discussing tips and keeping abreast of the latest technology. In the past eight months, attendance has risen from seven to more than 40, and other departments have been inspired to start similar activities.
We organised ourselves to be flexible and reactive, building a network that covers ethical, legal, teaching, technical and user-related matters. Collaboration and knowledge flow, both vertical and horizontal, have been crucial to harnessing LLMs and navigating the mix of excitement, worry and scepticism within the university.
Everyday reality
Now LLMs are an integral part of our daily work. They have changed the way we approach tasks such as coding, summarising documents, brainstorming, workshop and presentation design, and application drafting. We have, for example, used these technologies to help researchers find the European Research Council panel best suited to their application.
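The article does not describe how that panel matching works, but one plausible approach is semantic similarity between a draft abstract and the ERC panel descriptions. Below is a minimal sketch under that assumption, using the open-source sentence-transformers library; the model name and the abbreviated panel texts are illustrative, not the tool SDU RIO actually built.

```python
# Hypothetical sketch: rank ERC panels for a draft abstract by embedding
# both and comparing cosine similarity.
# Assumes: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Illustrative snippets; a real tool would use the full official panel texts.
panels = {
    "PE6": "Computer science and informatics: AI, machine learning, software.",
    "LS2": "Integrative biology: genomics, bioinformatics, systems biology.",
    "SH3": "The social world and its interactions: sustainability, mobility.",
}

abstract = "We develop machine-learning methods to mine research funding data."

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open model, runs on a laptop
panel_ids = list(panels)
panel_emb = model.encode([panels[p] for p in panel_ids], convert_to_tensor=True)
query_emb = model.encode(abstract, convert_to_tensor=True)

scores = util.cos_sim(query_emb, panel_emb)[0]  # one similarity score per panel
for pid, score in sorted(zip(panel_ids, scores.tolist()), key=lambda x: -x[1]):
    print(f"{pid}: {score:.3f}")
```

A ranked list like this would only be a starting point for a conversation with the researcher, not a definitive recommendation.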
At the same time, we recognise that generative AI remains a wild west. Platforms come, go and change rapidly. Many cloud-based technologies are located outside the EU and its data protection rules. Concerns about intellectual property, bias and the ethical liberties taken in the training of LLMs shape our decision-making.
To address these issues, we prioritise data security and ethical considerations. We try to deploy models on the premises using either laptops or the university’s high-performance computing environment. And we focus on European platforms such as Mistral and open-source projects. By tailoring the technology to our needs and maintaining independence, openness and flexibility, we strive to avoid lock-in to specific platforms and align how we use LLMs with our values and regulatory requirements.
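As an illustration of that on-premises approach, here is a minimal sketch of running an open-weight Mistral model locally with the Hugging Face transformers library. The checkpoint name and prompt are examples only; the point is that, once the weights are downloaded, prompts and documents never leave the local machine, whether that is a laptop or an HPC node.

```python
# Hypothetical sketch: run an open-weight instruct model entirely on local
# hardware, keeping prompts and documents on the premises.
# Assumes: pip install transformers torch accelerate, plus enough GPU memory.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.3",  # illustrative open-weight checkpoint
    device_map="auto",                           # place weights on available hardware
)

messages = [
    {"role": "user",
     "content": "In two sentences, what is an ERC Starting Grant?"},
]
out = generator(messages, max_new_tokens=200)
print(out[0]["generated_text"][-1]["content"])  # the model's reply
```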
The rapid pace of LLM development has implications for the future of research management, including job security. As the wave gathers force, continued exploration and collaboration are crucial, as the recent AI Days organised by the European Association of Research Managers and Administrators have shown. Academic actors must play a proactive and leading role in defining and pioneering how AI is used in research and research management.
We invite readers and attendees at the Earma conference to share their experiences, concerns and ideas about LLMs. By working together, we can harness the potential of LLMs and build a future where AI serves as a powerful tool for innovation and progress across research.
Research Professional News is media partner for the Earma 2024 conference, held this week in Odense, Denmark.
Maéva Vignes founded and heads the data group in the research support team at the University of Southern Denmark’s Research Innovation Organisation. Lionel Jouvet is a member of the data group and an expert in the implementation of AI tools.
A version of this article also appeared in Research Europe