Safety of advanced AI under the spotlight in first ever independent, international scientific report


New research supported by over 30 nations, as well as representatives from the EU and the UN, shows the impact AI could have if governments and wider society fail to deepen their collaboration on AI safety, as the first iteration of the International Scientific Report on the Safety of Advanced AI is published today. The development of the report was one of the key commitments to emerge from the Bletchley Park discussions, forming part of the landmark Bletchley Declaration agreed at the AI Safety Summit.

Initially launched as the State of Science report last November, the report unites a diverse global team of AI experts, including an Expert Advisory Panel drawn from 30 leading AI nations, as well as representatives of the UN and the EU, to bring together the best existing scientific research on AI capabilities and risks. The report aims to give policymakers across the globe a single source of information to inform their approaches to AI safety.

Today’s report recognises that advanced AI can be used to boost wellbeing, prosperity, and new scientific breakthroughs – many of which have already been seen in fields including healthcare, drug discovery, and climate change mitigation. But it notes that, like all powerful technologies, current and future developments could result in harm. For example, malicious actors can use AI to spark large-scale disinformation campaigns, fraud, and scams. Future advances in advanced AI could also pose wider risks, including labour market disruption, economic power imbalances, and inequalities.

However, the report highlights a lack of universal agreement among AI experts on a range of topics, including both the state of current AI capabilities and how these could evolve over time. It also explores differing opinions on the likelihood of extreme risks which could impact society, such as large-scale unemployment, AI-enabled terrorism, and a loss of control over the technology. Experts broadly agree that improving our understanding must be a priority, and that the future decisions of societies and governments will ultimately have an enormous impact.

Secretary of State for Science, Innovation, and Technology, Michelle Donelan said:  

“AI is the defining technology challenge of our time, but I have always been clear that ensuring its safe development is a shared global issue. When I commissioned Professor Bengio to produce this report last year, I was clear it had to reflect the enormous importance of international cooperation to build a scientific evidence-based understanding of advanced AI risks. This is exactly what the report does.

“Building on the momentum we created with our historic talks at Bletchley Park, this report will ensure we can capture AI’s incredible opportunities safely and responsibly for decades to come.

“The work of Yoshua Bengio and his team will play a substantial role informing our discussions at the AI Seoul Summit next week, as we continue to build on the legacy of Bletchley Park by bringing the best available scientific evidence to bear in advancing the global conversation on AI safety.”

This interim publication is focused on advanced ‘general-purpose’ AI. This includes state-of-the-art AI systems which can produce text and images, and make automated decisions. The final report is expected to be published in time for the AI Action Summit, which is due to be hosted by France, and will now take on evidence from industry, civil society, and a wide range of representatives from the AI community. This feedback will ensure the report keeps pace with the technology’s development, being updated to reflect the latest research and expanded across a range of other areas to give a comprehensive view of advanced AI risks.

International Scientific Report on the Safety of Advanced AI Chair, Professor Yoshua Bengio, said:  

“This report summarizes the existing scientific evidence on AI safety to date, and the work led by a broad swath of scientists and panel members from 30 nations, the EU and the UN over the past six months will now help inform the next chapter of discussions of policy makers at the AI Seoul Summit and beyond.

“When used, developed and regulated responsibly, AI has incredible potential to be a force for positive transformative change in almost every aspect of our lives. However, because of the magnitude of impacts, the dual use and the uncertainty of future trajectories, it is incumbent on all of us to work together to mitigate the associated risks in order to be able to fully reap these benefits.

“Governments, academia, and the wider society need to continue to advance the AI safety agenda to ensure we can all harness AI safely, responsibly, and successfully.”

Prof. Andrew Yao, Institute for Interdisciplinary Information Sciences, Tsinghua University, said:

“A timely and authoritative account on the vital issue of AI safety.”

Marietje Schaake, International Policy Director, Stanford University Cyber Policy Center, said:

“Democratic governance of AI is urgently needed, on the basis of independent research, beyond hype. The Interim International Scientific Report catalyses expert views about the evolution of general-purpose AI, its risks, and what future implications are. While much remains unclear, action by public leaders is needed to keep society informed about AI, and to mitigate present day harms such as bias, disinformation and national security risks, while preparing for future consequences of more powerful general purpose AI systems.”

Nuria Oliver, PhD, Director of ELLIS Alicante, the Institute of Humanity-centric AI, said:

“This must-read report – which is the result of a collaborative effort of 30 countries – provides the most comprehensive and balanced view to date of the risks posed by general purpose AI systems and showcases a global commitment to ensuring their safety, such that together we create secure and beneficial AI-based technology for all.”

This year promises to be an important 12 months for the technology, as increasingly capable AI models are expected to hit the market. The speed of AI’s development is one of several areas of focus for today’s report, which notes that while recent progress has been rapid, there is still considerable disagreement around current capabilities and uncertainty over whether this pace can be sustained.

The UK has rapidly established a reputation as a trailblazer in AI safety, underpinned by the establishment of the AI Safety Institute. Backed by an initial £100 million in funding, the Institute represents the world’s first state-backed body dedicated to AI safety research. It has already agreed an historic alliance with the United States on AI safety and published its world-first approach to model safety evaluations earlier this year.   

This month’s AI Seoul Summit represents an important opportunity to once again cement AI safety’s place on the international agenda. Attendees will be able to use the interim International Scientific Report on the Safety of Advanced AI to further the discussions which were kickstarted at November’s AI Safety Summit. A final edition of the report is expected to be released ahead of the next round of discussions on AI safety, which will be hosted by France.
