Addressing AI risk

Things are moving fast in the world of Artificial Intelligence (AI), and world leaders are alert to the risks it poses.

The UK’s Prime Minister Rishi Sunak has hailed last week’s artificial intelligence summit at Bletchley Park – famous as the site of the wartime effort to break the Enigma code in World War Two – as a diplomatic breakthrough, after it produced an international declaration to address the risks posed by the technology, as well as a multilateral agreement to test advanced AI models.

The UK, US, EU, Australia and China have all agreed that AI poses a potentially catastrophic risk to humanity, in the first international declaration to deal with the fast-emerging technology.

Twenty-eight governments signed up to the so-called Bletchley Declaration on the first day of the AI safety summit, hosted by the British government. The countries agreed to work together on AI safety research. As the declaration says, signatories will work on “identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies”.

Scientific welcome

In the UK, the Academy of Medical Sciences, the British Academy, the Royal Academy of Engineering, and the Royal Society published the joint statement below on the outcomes of the Global AI Safety Summit, urging that the near-term risks associated with AI be addressed alongside longer-term concerns:

“We are now in an era in which AI could profoundly shape the ways we live and work.

It is therefore crucial that we thoughtfully address, and put in place systems to mitigate, both the immediate impacts of how AI affects people’s lives and livelihoods in the near-term as well as the longer-term concerns surrounding existential risk. This is essential to realising the significant benefits and opportunities the technologies can offer, alongside managing risks and preventing harm.”

“AI is not solely a technical matter but one which cuts across science, engineering, humanities, social sciences, and medical sciences. It is essential that all these disciplines come together to address the complex challenges and opportunities presented by AI. A collaborative, interdisciplinary approach in defining how AI is governed will ensure its responsible and safe development and deployment as well as its ethical use. It is crucial that the public is meaningfully involved in these discussions from the outset.”

“Together the UK’s four national academies are committed to championing comprehensive AI safety standards that incorporate ethical and societal considerations into AI development and deployment to effectively balance the risks of AI with its potential to provide great benefits to society.”

Near-term risk

While the longer-term risks are important, the joint statement added, the near-term risks associated with AI, which have the potential to erode trust and hinder adoption of beneficial new technologies, must also be addressed:

“This is vital to ensure the benefits of AI are distributed across all of society, from advancements in healthcare and the delivery of critical public services, to the scale up of AI companies and wider improvements to productivity and work across the economy.

International collaboration between governments, AI companies, researchers, and civil society is also critical to understand and effectively manage the risks posed by established technologies and those at the frontier of AI. The challenge of governing future AI technology on a global scale will require sustained efforts by all involved parties. The UK’s four national academies welcome the progress that has been made at the AI Safety Summit. As national and international bodies and future summits are established to build safe AI, the Academies will ensure that experts and professionals across science, social sciences, humanities, engineering and health can support in horizon scanning and responding to risks.”

AI blueprint 

US Vice President Kamala Harris has said the strategy unveiled by the country, and its approach to the risks posed by AI, should serve as a blueprint for countries and governments around the world.

Harris’s comments came after she attended the two-day global summit on AI and its risks in the UK. She also warned that disinformation continued to pose a major threat to the planet and its security.

Speaking about the rapid development of AI, Harris explained: “We acknowledge, as well, of course, the benefits, but we do acknowledge that there are risks. And part of the role of the United States in these meetings has been to require that there be some understanding and appreciation for the full spectrum of risks.”

“I have spoken about how we should think about existential threats and define them not only by what they may be but who they may harm and how then the definition of an existential threat may differ depending on who is involved and how it rolls out.”

“We also have outlined, as part of our global support and the building of global support for the US’s perspective on AI, that we believe and have rolled out what we believe to be the responsible – the responsible military use of AI. And again, I do believe that will be a model. Thirty countries so far have joined us in that perspective.”

Those at the summit said they have agreed “governments have a role in seeing that external safety testing of frontier AI models occurs, marking a move away from responsibility for determining the safety of frontier AI models sitting solely with the companies”.

Governments added that they have a joint ambition to invest in public sector capacity for testing and other safety research; to share the outcomes of evaluations with other countries, where relevant; and to work towards developing, in due course, shared standards in this area – laying the groundwork for international progress on AI safety in the years to come.