“Early intervention” on ethical risks of AI in research

29/01/2024

THE RAPID advancement of Generative AI and Large Language Models (LLMs), such as ChatGPT, Bard, and Claude, presents researchers with exciting opportunities for innovation and efficiency. Generative AI can assist researchers in many ways, from designing data collection tools and generating survey responses to cleaning, analysing and reporting on data. As with any new tool, however, it needs to be used responsibly.

A new 10-month project led by the University of Strathclyde aims to help researchers and their institutions make informed decisions on how they use Generative AI with participant data, protecting the privacy of the people whose participation makes research possible. As part of the work, the team will gauge the views and concerns of University Research Ethics Committees around the UK.

The project has been awarded £100,000 funding from REPHRAIN, the National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online, and is in collaboration with the University of Edinburgh.

Professor Wendy Moncur of Strathclyde’s Department of Computer and Information Sciences, who is leading the project, said: “Generative AI capabilities are impressive and can save researchers time and give new insights. We will help researchers and their universities to foresee and avoid potential pitfalls in its use.

“These pitfalls include participant re-identification, where we have promised study participants that they will be anonymous yet Generative AI undoes our anonymisation and re-identifies them. Another potential pitfall is when we ask Generative AI to make up extra data based on participant data that we already have, and it ‘hallucinates’ – makes up – misleading or even defamatory information about people.

“Our aim is to enable UK universities to exploit the incredible potential of Generative AI, while protecting participants’ privacy and the excellent quality of UK academic research, by understanding and guarding against potential pitfalls.”

The research aims to help guide research institutions, University Research Ethics Committees, regulatory authorities, funders, including REPHRAIN itself, data custodians, professional organisations, publishers, and advocacy groups in their early encounters with research involving Generative AI.

The project is informed by the UK Government’s Futures Toolkit, a resource that policy professionals can use to embed long-term strategic thinking in the policy and strategy process.
