Does the UK’s approach to AI lack credibility?

Jamie Boote, associate principal consultant at Synopsys Software Integrity Group, and Flavia Kenyon, barrister at The 36 Group and member of the International Cyber Expo’s Advisory Council, weigh in.

A new report by the Ada Lovelace Institute, summarising the UK’s current plans for regulating AI, makes 18 recommendations and finds that legal protections allowing private citizens to seek redress when AI goes wrong are severely limited.

“The main difference between AI and traditional software is that AI can accomplish tasks with higher speed and accuracy than traditionally programmed software in some areas,” said Boote. 

“The growing pains associated with this rapidly expanding technology stem from people using AI to do things that traditional software couldn’t. This runs the risk of allowing humans to evade accountability, e.g. ‘It wasn’t me, it was the software’, or to perform shady or illegal tasks much faster than was previously possible with humans alone. Coming to grips with the new risk means understanding the new areas to which AI could bring automated abuses.”

“In the past, protecting consumers against abuses by companies has fallen to governments and regulatory bodies. One recent example is how the EU’s GDPR imposed strong penalties for violating consumer privacy rights. To use GDPR as a template for consumer protections that could apply to AI, the regulatory body would have to clearly define a set of consumer rights potentially threatened by the use of AI. Once it has defined the vulnerable consumer data sets and outcomes, the regulations should list a series of protections to comply with, backed by enforceable penalties for violations.”

“Another approach could be to expand protections from actions taken by humans to those taken by AI. For example, making a discriminatory hiring decision is illegal in many countries, and these laws should be expanded or interpreted to assign liability and accountability to businesses for discriminatory outcomes produced by AI software they use, own, or operate. Once these actions are identified and ruled illegal even when performed by AI, regulatory bodies could provide recommendations and guidance on how to build and train AI/ML systems so that the resulting bots aren’t trained to break existing, updated, or new laws.”
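Boote’s comments stop short of saying how businesses would test for discriminatory outcomes in practice. One common heuristic auditors borrow from US employment guidance is the “four-fifths rule”, which flags any group whose selection rate falls below 80% of the best-off group’s. The sketch below is a minimal illustration of that idea in Python; the data, group names, and threshold are hypothetical, and the rule is a screening heuristic, not a legal test.

```python
# Illustrative sketch: screening an AI hiring tool's decisions for disparate
# impact with the "four-fifths rule" heuristic. All data below is hypothetical.

from collections import Counter

# Hypothetical shortlisting decisions emitted by an AI screening tool:
# (applicant_group, was_shortlisted)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Return the shortlisting rate for each applicant group."""
    totals, shortlisted = Counter(), Counter()
    for group, was_shortlisted in records:
        totals[group] += 1
        if was_shortlisted:
            shortlisted[group] += 1
    return {group: shortlisted[group] / totals[group] for group in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag any group whose selection rate is below `threshold` times the
    highest group's rate -- the classic proxy for disparate impact."""
    best = max(rates.values())
    return {group: rate / best >= threshold for group, rate in rates.items()}

rates = selection_rates(decisions)
for group, passes in four_fifths_check(rates).items():
    verdict = "OK" if passes else "POSSIBLE DISPARATE IMPACT"
    print(f"{group}: selection rate {rates[group]:.0%} -> {verdict}")
```

In practice such a check would run over a tool’s real decision logs, and a failing result would trigger human review rather than an automatic conclusion of illegality.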

Flavia Kenyon added: “AI has unleashed an unprecedented technological power to collect, process, and analyse vast amounts of personal data scraped off the Internet. As the new Ada Lovelace Institute report discusses, that power can be used for purposes both positive and negative: from medical research that saves lives, to mass surveillance, oppressive and invasive state control, the analysis of facial expressions and other biometric data, and the creation of fake, misleading, biased, or inflammatory content, text, images, video and audio that can become weaponised in the wrong hands.”

“The real concern with AI is who controls it, as well as the legal basis upon which the AI tool processes people’s data, the mechanism allowing individuals to control how their data is used, and people’s right to have their data deleted or corrected. The report is an attempt at balancing the need for technological innovation with the protection of people’s rights. This is about our individuality. In truth, the battleground is over whether AI controls us, or whether we control it.”
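Kenyon’s reference to deletion and correction rights corresponds to what GDPR terms data subject requests. As a purely illustrative sketch (the in-memory store, record layout, and function below are hypothetical, not drawn from the report), a service honouring an erasure request might look like this:

```python
# Illustrative sketch of servicing a GDPR-style "right to erasure" request.
# The in-memory store and record layout are hypothetical; a real service
# would also need identity verification, legal-hold checks, and propagation
# to backups and downstream processors.

from datetime import datetime, timezone

# Hypothetical personal-data store keyed by user ID.
user_records = {
    "user-42": {"name": "A. Example", "email": "a@example.com"},
    "user-43": {"name": "B. Example", "email": "b@example.com"},
}

# Minimal audit trail demonstrating that requests were honoured.
erasure_log = []

def handle_erasure_request(user_id: str) -> bool:
    """Delete all personal data held for `user_id` and log the erasure.
    Returns False if no data is held for that user."""
    if user_id not in user_records:
        return False
    del user_records[user_id]
    erasure_log.append({
        "user_id": user_id,
        "erased_at": datetime.now(timezone.utc).isoformat(),
    })
    return True

print(handle_erasure_request("user-42"))  # True: data deleted and logged
print(handle_erasure_request("user-42"))  # False: nothing left to delete
```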
