Harris calls for world to define AI security standards

US Vice President Kamala Harris has said the strategy unveiled by the US, and its approach to the risks posed by artificial intelligence (AI), should serve as a blueprint for countries and governments around the world.

Harris’s comments came after she attended the two-day global summit on AI and its risks in the UK. She also warned that disinformation continues to pose a major threat to the planet and its security.

Speaking about the rapid development of AI, Harris explained: “We acknowledge, as well, of course, the benefits, but we do acknowledge that there are risks.  And part of the role of the United States in these meetings has been to require that there be some understanding and appreciation for the full spectrum of risks.

“I have spoken about how we should think about existential threats and define them not only by what they may be but who they may harm and how then the definition of an existential threat may differ depending on who is involved and how it rolls out.

“We also have outlined, as part of our global support and the building of global support for the US’s perspective on AI, that we believe and have rolled out what we believe to be the responsible — the responsible military use of AI.  And again, I do believe that will be a model.  Thirty countries so far have joined us in that perspective.

“And then, of course, the G7 agreed to a code of conduct for AI.”

Harris added: “I do believe that the voice that we have offered in this global discussion has been significant.  It has been a priority, for example, for us to ensure that civil society have an equal voice at the table. Civil society has always played a very unique and important role in holding public and private sector to account.

“And as we are thinking about, in particular, safety issues and defining safety, defining risk, and defining who is vulnerable in that context, civil society has an important voice to add to the policies that will be developed.

“So, we have had, I think, a great impact there.  I think that the work that we are doing — the UK, the United States, and the other allies that were there — is also about a commitment that we have made, understanding that there is global action that must be required on this issue, because any one nation who creates laws around AI will invariably have an impact on millions — tens of millions and more around the world.”

Turning to the threat posed by disinformation, the vice president said: “So, I am clear that one of the greatest threats to democracy is mis- and disinformation.

“Now, the fact of mis- and disinformation is not new.  But what increasingly has happened with the evolution of technology is mis- and disinformation can spread quickly.  And with AI, in particular, it can take on a form that makes it very difficult for the receiver of that information to distinguish between fact or fiction.

“And I have worked on this issue for many, many years because, of course, I was a career prosecutor for many years, but also, most recently, before becoming vice president, in the Senate, where I served on the Senate Intelligence Committee when we investigated Russia’s interference in the 2016 election.  2016 seems like a lifetime ago in technology.  And even then, we documented evidence of a nation-state attempting to interfere in the election for president of the United States.

“I don’t know of a clearer example you can have of misinformation by nefarious actors being used to upend the people’s confidence in their democracy and the most important pillar of the democracy, which is free and fair and open elections.”

The summit, at Bletchley Park, saw governments and AI companies announcing they recognised that both parties have a crucial role to play in testing the next generation of AI models, to ensure AI safety – both before and after models are deployed.

This includes collaborating on testing the next generation of AI models against a range of potentially harmful capabilities, including critical national security, safety and societal harms.

Those at the summit said they have agreed “governments have a role in seeing that external safety testing of frontier AI models occurs, marking a move away from responsibility for determining the safety of frontier AI models sitting solely with the companies”.

Governments added they have a joint ambition to invest in public sector capacity for testing and other safety research; to share outcomes of evaluations with other countries, where relevant, and to work towards developing, in due course, shared standards in this area – laying the groundwork for future international progress on AI safety in years to come.

The statement builds on the Bletchley Declaration agreed by all countries attending on the first day of the AI Safety Summit. It is one of several significant steps towards building a global approach to ensuring safe, responsible AI achieved at the summit, alongside the UK’s launch of a new AI Safety Institute.

The countries represented at Bletchley have also agreed to support Professor Yoshua Bengio, a Turing Award-winning AI academic and member of the UN’s Scientific Advisory Board, in leading the first-ever frontier AI ‘State of the Science’ report. This will provide a scientific assessment of existing research on the risks and capabilities of frontier AI and set out the priority areas for further research to inform future work on AI safety.

The findings of the report will support future AI Safety Summits, plans for which have already been set in motion. The Republic of Korea has agreed to co-host a mini virtual summit on AI within the next six months. France will then host the next in-person summit a year from now.

UK Prime Minister Rishi Sunak said: “Until now the only people testing the safety of new AI models have been the very companies developing it. We shouldn’t rely on them to mark their own homework, as many of them agree.

“We’ve reached a historic agreement, with governments and AI companies working together to test the safety of their models before and after they are released. The UK’s AI Safety Institute will play a vital role in leading this work, in partnership with countries around the world.”