The UK’s National Cyber Security Centre (NCSC) has published what it says are the world’s first globally agreed guidelines on safe and secure AI development.
The Guidelines for Secure AI System Development were drawn up by the NCSC with help from industry experts and 21 other international agencies and ministries, including the US Cybersecurity and Infrastructure Security Agency (CISA).
A total of 18 countries, including all of the G7, have now endorsed and “co-sealed” the guidelines, which are intended to help developers make informed decisions about cybersecurity as they build new AI systems.
NCSC CEO Lindy Cameron argued that the rapid pace of AI development means governments and agencies must keep up.
“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout,” she added.
“I’m proud that the NCSC is leading crucial efforts to raise the AI cyber security bar: a more secure global cyberspace will help us all to safely and confidently realise this technology’s wonderful opportunities.”