Dr Peter Whitton, Senior Academic Development Manager, Durham Centre for Academic Development, Durham University, and ChatGPT 3.5, Generative AI chatbot, OpenAI
Generative AI seems to be everywhere: causing division and distrust, creating an abundance of spurious images and text on our social media platforms, destabilising higher education assessment systems, and enraging newspaper editors who warn us that “AI will take our jobs”.

It would be easy to discount these worries as ill-informed scaremongering; however, for those involved in research leadership, they highlight legitimate areas of concern. How will AI influence the recruitment and retention of high-quality researchers, the production and dissemination of research, research ethics and accountability, research employment, and how researchers communicate their ideas to policy makers, publishers and the wider public?
Researchers have a long history of using AI to assist their workflow. As these tools have advanced, they have enabled researchers to create and manage knowledge more efficiently and effectively across a diversity of fields (Groenewald et al., 2024; Rajpurkar et al., 2022; Kaack et al., 2022), and on a scale greater than human endeavour alone could achieve. The recent availability of broad generative AI tools such as ChatGPT, and of specialist ‘research-focused’ tools such as Elicit, Research Rabbit and Scite, has enabled researchers to automate time-consuming tasks such as data analysis, literature reviews, and drafting research papers.
In this article I argue (with the help of my co-author ChatGPT 3.5) that the availability of generative AI tools may be changing our relationship from one of ‘researcher using AI’ to that of ‘AI as researcher, collaborator and co-author’. ChatGPT and I refer to a series of recent conversations we had about the challenges faced by, and the desirable characteristics of, future research leaders in a world where generative AI tools are ubiquitous.
AI and its effect on research employment
Media stories about AI often reference the disruptive effect that these technologies may have on job security. Some research has argued that AI may replace repetitive and mundane work, leaving space for more creative opportunities (Guliyev, 2023).
ChatGPT suggests that: Some roles, particularly those focused on repetitive or foundational research tasks (e.g. literature reviews, basic coding), might decline in demand, and this may in turn lead to a reduction in entry-level research positions.
AI’s capacity to execute (some) cognitive tasks may suggest that white-collar occupations, in particular professional, scientific, and technical roles, are at greater risk of disruption (Dahlin, 2024; Department of Education, 2023). Dahlin (2024) also highlights that workers from minoritised groups and those in junior or precarious positions may (justifiably) feel most vulnerable. Despite considerable work across the sector to improve research culture, address research precarity, and acknowledge structural biases, precarity is still pervasive among postdoctoral researchers, with women and minority groups being more vulnerable to its adverse effects (OECD, 2021). This potential job displacement or replacement may create additional challenges for research leaders working to address workplace inequalities and create diverse and sustainable research talent pools, while at the same time exploiting the potential advantages that AI may hold.
AI as a research collaborator and co-author
Some ‘research-focused’ generative AI tools highlight AI’s potential to streamline research workflows and boost researcher productivity and efficiency.
ChatGPT suggests that: Generative AI may be useful in… data processing and analysis; literature review and information gathering; hypothesis generation and experimental design.
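By way of illustration, the sketch below shows how the second of these tasks might be scripted against a generative model such as my co-author. It is a minimal, hypothetical example: it assumes the openai Python package and an API key in the environment, and the model name, abstracts and prompt are placeholders rather than a tested workflow.

```python
# Minimal, hypothetical sketch: asking a generative model to pull common
# themes from a batch of paper abstracts. Assumes the `openai` package is
# installed and OPENAI_API_KEY is set; abstracts and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

abstracts = [
    "We examine the effect of generative AI tools on researcher productivity...",
    "A survey of postdoctoral researchers on employment precarity...",
]

prompt = (
    "Identify the common themes across the following paper abstracts "
    "and list them as short bullet points:\n\n" + "\n\n".join(abstracts)
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative choice; any chat model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Even in a toy example like this, the division of labour is telling: the human decides which papers matter and what question to ask; the model does the reading.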
Elicit AI’s marketing material specifically highlights its ability to “find themes and concepts across many papers”. It could be argued that, if used in this way, the AI moves beyond its role as a useful assistant and encroaches on the more creative and intellectually demanding areas of research work (i.e. the fun bit). Recent papers have suggested that generative AI, where it has been used to produce new insight, should be credited as a co-author on academic outputs (Osmanovic-Thunström and Steingrimsson, 2023; Polonsky and Rotman, 2023).
However, academic publishers have distanced themselves from this idea with several journals (e.g. Nature, JAMA) specifically prohibiting AI co-authorship and insisting on human accountability for submitted text. Future AI developments may mean that researchers (and research leaders) are relegated to little more than a gatekeeper role, scrutinising and signing off fully automated research outputs.
Science fiction?
Perhaps. However, the Japanese startup Sakana.ai is developing a product called AI Scientist, a fully automated pipeline for end-to-end research paper generation: it can perform idea generation, literature search, experiment planning, experiment iteration, figure generation, manuscript writing, and reviewing.
In the US, the Lawrence Berkeley National Laboratory’s A-Lab combines robotic automation and artificial intelligence to create and analyse samples of new chemical compounds (see also Szymanski et al., 2023). The lab’s website boldly states: “A-Lab is designed as a ‘closed-loop,’ where decision making is handled without human interference” (Biron, 2023).
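Neither of these systems publishes its orchestration code here, but the ‘closed-loop’ idea both describe can be sketched in outline. The skeleton below is entirely hypothetical: every function is a placeholder standing in for an AI- or robot-driven stage, and none corresponds to a real Sakana.ai or A-Lab interface. It simply shows how the stages listed above might be chained without a human in the loop.

```python
# Hypothetical skeleton of a "closed-loop" automated research pipeline.
# Every function is a placeholder standing in for an AI- or robot-driven
# stage; none corresponds to a real Sakana.ai or A-Lab API.

def generate_idea(history):
    """Propose a research question, informed by previous loop iterations."""
    return {"question": "placeholder hypothesis", "iterations_seen": len(history)}

def run_experiment(idea):
    """Plan and execute an experiment for the idea; return the results."""
    return {"idea": idea, "data": [0.1, 0.2, 0.3]}

def write_manuscript(results):
    """Draft a paper (text and figures) from the experimental results."""
    return f"Draft paper reporting {len(results['data'])} measurements."

def review(manuscript):
    """Automated review: score the draft and decide whether to accept it."""
    return {"score": 0.4, "accept": False}

def closed_loop(max_iterations=3):
    """Run the full cycle repeatedly, with no human touching any stage."""
    history = []
    for _ in range(max_iterations):
        idea = generate_idea(history)
        results = run_experiment(idea)
        paper = write_manuscript(results)
        verdict = review(paper)
        history.append(verdict)
        if verdict["accept"]:
            return paper
    return None  # loop ended without an accepted paper

closed_loop()
```

The point of the sketch is the control flow: once the review stage feeds back into idea generation, there is no step at which a human judgement is structurally required, which is precisely what makes the gatekeeper question below so pressing.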
ChatGPT suggests that: although fully automated research is plausible in certain high-throughput, data-intensive lab research, human insight would still be required in many domains, particularly for nuanced understanding, bias checking, interpreting unexpected results, and where value-based decisions are crucial.
What does this mean for future research leaders?
Our relationship with AI is in its infancy. As these technologies become more intuitive, mainstream and potentially ‘human-like’, will collaborative AI/human research projects become the norm? How will researchers work with their AI counterparts, and how will this relationship be mediated by those who lead research?
ChatGPT suggests that: research leaders must take a more strategic role and facilitate AI literacy in the teams and projects that they manage.
Beyond this, research leaders need to be acutely aware of the fast-changing AI research-literacy agenda, both for the staff they manage and for their own CPD. It is important that entry-level research positions remain available, and that job descriptions, career paths and mentorship opportunities reflect the need for highly skilled junior researchers from all backgrounds. AI may open up further opportunities for collaboration and interdisciplinary working, and research leaders need to be mindful that not all prospective research partners will have equal access to AI tools.
It is envisaged that research leaders will play a full role in developing AI policies (locally, nationally and internationally) and will be proactive in driving forward responsible, ethical and sustainable innovation. As research gatekeepers, leaders must be able to make informed, balanced judgements about projects, factoring in the social and environmental implications of AI use (UN Environment Programme, 2024) and weighing these against the potential benefits of efficiency.