EU outlines risk-based regulation of Artificial Intelligence
European Union lawmakers have presented a risk-based proposal for regulating applications of artificial intelligence (AI), in a move that business observers have suggested could have wide-ranging implications for companies’ professional indemnity insurance programmes.
The plan includes several outright prohibitions, covering China-style social credit scoring systems and AI-enabled behaviour manipulation techniques that can cause physical or psychological harm.
There are also restrictions on law enforcement’s use of biometric surveillance in public places, though with wide-ranging exemptions.
Under the proposals, a subset of so-called “high-risk” AI uses will be subject to specific regulatory requirements.
There are also transparency requirements for certain uses of AI, such as chatbots and so-called “deep fakes”, where the EU believes the potential risk can be mitigated by informing users that they are interacting with something artificial.
The planned regulations will be extraterritorial in scope and are intended to apply to any company selling an AI product or service into the EU, not just to EU-based companies and individuals.
“We aim to make Europe world-class in the development of a secure, trustworthy and human-centered Artificial Intelligence, and the use of it,” said the Commission’s executive vice president Margrethe Vestager.
“On the one hand, our regulation addresses the human and societal risks associated with specific uses of AI,” she said. “This is to create trust. On the other hand, our coordinated plan outlines the necessary steps that Member States should take to boost investments and innovation. To guarantee excellence. All this, to ensure that we strengthen the uptake of AI across Europe.”
Under the proposal, mandatory requirements are attached to a “high-risk” category of AI applications, meaning those that present a clear safety risk or threaten to impinge on EU fundamental rights, such as the right to non-discrimination.
Military uses of AI are specifically excluded from scope as the regulation is focused on the bloc’s internal market.
The makers of high-risk applications will have a set of obligations to comply with before bringing their product to market, including requirements around the quality of the data sets used to train their AIs and a level of human oversight over not just the design but also the use of the system, as well as ongoing obligations in the form of post-market surveillance.
Other requirements include keeping records of the AI system to enable compliance checks and providing relevant information to users. The robustness, accuracy and security of the AI system will also be subject to regulation.
An EU-wide database, managed by the Commission, will also be set up as a register of high-risk systems deployed in the bloc.
A new body, called the European Artificial Intelligence Board (EAIB), will also be set up to support consistent application of the regulation, effectively mirroring the European Data Protection Board, which offers guidance on applying the GDPR.
High-risk systems identified by the EU
AI systems identified as high-risk include AI technology used in:
- Critical infrastructures (e.g. transport), which could put the life and health of citizens at risk;
- Educational or vocational training, which may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
- Safety components of products (e.g. AI application in robot-assisted surgery);
- Employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures);
- Essential private and public services (e.g. credit scoring that denies citizens the opportunity to obtain a loan);
- Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
- Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
- Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).