
Europe lays out plan for risk-based AI rules to boost trust and uptake


Image: 3D rendered depiction of a digital avatar (Image Credits: DKosig / Getty Images)

European Union lawmakers have presented their risk-based proposal for regulating high risk applications of artificial intelligence within the bloc’s single market.

The plan includes prohibitions on a small number of use-cases that are considered too dangerous to people’s safety or EU citizens’ fundamental rights, such as a China-style social credit scoring system or AI-enabled behavior manipulation techniques that can cause physical or psychological harm. There are also restrictions on law enforcement’s use of biometric surveillance in public places — but with very wide-ranging exemptions.

Most uses of AI won’t face any regulation (let alone a ban) under the proposal. But a subset of so-called “high risk” uses will be subject to specific regulatory requirements, both ex ante (before) and ex post (after) launching into the market.

There are also transparency requirements for certain use-cases of AI — such as chatbots and deepfakes — where EU lawmakers believe that potential risk can be mitigated by informing users they are interacting with something artificial.

The planned law is intended to apply to any company selling an AI product or service into the EU, not just to EU-based companies and individuals — so, as with the EU’s data protection regime, it will be extraterritorial in scope.

The overarching goal for EU lawmakers is to foster public trust in how AI is implemented to help boost uptake of the technology. Senior Commission officials talk about wanting to develop an “excellence ecosystem” that’s aligned with European values.

“Today, we aim to make Europe world-class in the development of a secure, trustworthy and human-centered Artificial Intelligence, and the use of it,” said Commission EVP, Margrethe Vestager, announcing adoption of the proposal at a press conference.

“On the one hand, our regulation addresses the human and societal risks associated with specific uses of AI. This is to create trust. On the other hand, our coordinated plan outlines the necessary steps that Member States should take to boost investments and innovation. To guarantee excellence. All this, to ensure that we strengthen the uptake of AI across Europe.”

Under the proposal, mandatory requirements are attached to a “high risk” category of applications of AI — meaning those that present a clear safety risk or threaten to impinge on EU fundamental rights (such as the right to non-discrimination).

Examples of high risk AI use-cases that will be subject to the highest level of regulation on use are set out in annex 3 of the regulation — which the Commission said it will have the power to expand by delegated acts, as use-cases of AI continue to develop and risks evolve.

For now, the cited high risk examples fall into the following categories:

- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, workers management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
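
For anyone who wants to track these categories programmatically, here is a minimal Python sketch that simply enumerates the Annex 3 areas as listed above (the enum and identifier names are our own illustration, not anything defined in the proposal):

```python
from enum import Enum

class HighRiskArea(Enum):
    """High risk AI areas cited in Annex 3 of the draft regulation, per the list above."""

    BIOMETRIC_ID_AND_CATEGORISATION = "Biometric identification and categorisation of natural persons"
    CRITICAL_INFRASTRUCTURE = "Management and operation of critical infrastructure"
    EDUCATION_AND_TRAINING = "Education and vocational training"
    EMPLOYMENT_AND_WORK = "Employment, workers management and access to self-employment"
    ESSENTIAL_SERVICES_AND_BENEFITS = "Access to and enjoyment of essential private and public services and benefits"
    LAW_ENFORCEMENT = "Law enforcement"
    MIGRATION_ASYLUM_BORDER_CONTROL = "Migration, asylum and border control management"
    JUSTICE_AND_DEMOCRATIC_PROCESSES = "Administration of justice and democratic processes"

# Example: looking up the description of a cited high risk area
print(HighRiskArea.LAW_ENFORCEMENT.value)
```

Bear in mind the Commission says it can expand this list via delegated acts, so any such mapping would need to track future updates.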

Military uses of AI are specifically excluded from scope as the regulation is focused on the bloc’s internal market.

The makers of high risk applications will have a set of ex ante obligations to comply with before bringing their product to market, including around the quality of the data-sets used to train their AIs and a level of human oversight over not just design but use of the system — as well as ongoing, ex post requirements, in the form of post-market surveillance.

Other requirements include a need to create records of the AI system to enable compliance checks and also to provide relevant information to users. The robustness, accuracy and security of the AI system will also be subject to regulation.

Commission officials suggested the vast majority of applications of AI will fall outside this highly regulated category. Makers of those ‘low risk’ AI systems will merely be encouraged to adopt (non-legally binding) codes of conduct on use.

Penalties for infringing the rules on specific AI use-case bans have been set at up to 6% of global annual turnover or €30M (whichever is greater), while violations of the rules related to high risk applications can scale up to 4% of global annual turnover (or €20M).
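
To make the "whichever is greater" mechanics concrete, here is an illustrative Python sketch. The function name is hypothetical, and it assumes the greater-of logic also applies to the lower 4%/€20M tier, which the text above only states explicitly for the 6% tier:

```python
def max_penalty_eur(global_annual_turnover_eur: float, prohibited_use: bool) -> float:
    """Illustrative ceiling on fines under the draft rules (not an official formula).

    Bans on specific AI use-cases: up to 6% of global annual turnover or EUR 30M,
    whichever is greater. High risk rule violations: up to 4% or EUR 20M
    (assumed here to follow the same greater-of logic).
    """
    if prohibited_use:
        return max(0.06 * global_annual_turnover_eur, 30_000_000)
    return max(0.04 * global_annual_turnover_eur, 20_000_000)


# Example: a firm with EUR 1B in global annual turnover breaching a banned use-case
# faces a ceiling of EUR 60M, since 6% of turnover exceeds the EUR 30M floor.
print(max_penalty_eur(1_000_000_000, prohibited_use=True))
```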

Enforcement will involve multiple agencies in each EU Member State — with the proposal intending oversight be carried out by existing (relevant) agencies, such as product safety bodies and data protection agencies.

That raises immediate questions over adequate resourcing of national bodies, given the additional work and technical complexity they will face in policing the AI rules; and also how enforcement bottlenecks will be avoided in certain Member States. (Notably, the EU’s General Data Protection Regulation is also overseen at the Member State level and suffers from a lack of uniformly vigorous enforcement.)

But the Commission does appear to have wised up to the risk of enforcement blockages: Article 37 of the proposal gives the EU executive power to investigate cases where “there are reasons to doubt whether a notified body complies with the requirements laid down in Article 33”. And also the power to “adopt a reasoned decision” where a Member State agency has failed to meet its obligations. 

There will also be an EU-wide database set up to create a register of high risk systems implemented in the bloc (which will be managed by the Commission).

A new body, called the European Artificial Intelligence Board (EAIB), will also be set up to support a consistent application of the regulation — mirroring the European Data Protection Board, which offers guidance for applying the GDPR.

In step with rules on certain uses of AI, the plan includes measures to co-ordinate EU Member State support for AI development, under a 2021 update to the EU’s 2018 Coordinated Plan — such as by establishing regulatory sandboxes and co-funding Testing and Experimentation Facilities to help startups and SMEs develop and accelerate AI-fuelled innovations; and by establishing a network of European Digital Innovation Hubs intended as ‘one-stop shops’ to help SMEs and public administrations become more competitive in this area — and via the prospect of targeted EU funding to support homegrown AI.

Internal market commissioner Thierry Breton said investment is a crucial piece of the plan. “Under our Digital Europe and Horizon Europe program we are going to free up a billion euros per year. And on top of that we want to generate private investment and a collective EU-wide investment of €20BN per year over the coming decade — the ‘digital decade’ as we have called it,” he said during today’s press conference. “We also want to have €140BN which will finance digital investments under Next Generation EU [COVID-19 recovery fund] — and going into AI in part.”

Shaping rules for AI has been a key priority for European Commission president Ursula von der Leyen, who took up her post at the end of 2019. A white paper was published last year, following a 2018 AI for EU strategy — and Vestager said that today’s proposal is the culmination of three years’ work.

Breton suggested that providing guidance for businesses to apply AI will give them legal certainty and Europe an edge.

“Trust… we think is vitally important to allow the development we want of artificial intelligence,” he said. “[Applications of AI] need to be trustworthy, safe, non-discriminatory — that is absolutely crucial — but of course we also need to be able to understand how exactly these applications will work.”

“What we need is to have guidance. Especially in a new technology… We are, we will be, the first continent where we will give guidelines — we’ll say ‘hey, this is green, this is dark green, this is maybe a little bit orange and this is forbidden’. So now if you want to use artificial intelligence applications, go to Europe! You will know what to do, you will know how to do it, you will have partners who understand pretty well and, by the way, you will come also in the continent where you will have the largest amount of industrial data created on the planet for the next ten years.

“So come here — because artificial intelligence is about data — we’ll give you the guidelines. We will also have the tools to do it and the infrastructure.”

A version of today’s proposal leaked last week — leading to calls by MEPs to beef up the plan, such as by banning remote biometric surveillance in public places.

In the event the final proposal does treat remote biometric surveillance as a particularly high risk application of AI — and there is a prohibition in principle on the use of the technology in public by law enforcement.

However, use is not completely proscribed, with a number of exceptions where law enforcement would still be able to make use of it, subject to a valid legal basis and appropriate oversight.

Protections attacked as too weak

Reactions to the Commission’s proposal included plenty of criticism of overly broad exemptions for law enforcement’s use of remote biometric surveillance (such as facial recognition tech) as well as concerns that measures in the regulation to address the risk of AI systems discriminating don’t go nearly far enough.

Criminal justice NGO, Fair Trials, said radical improvements are needed if the regulation is to contain meaningful safeguards in relation to criminal justice. Commenting in a statement, Griff Ferris, legal and policy officer for the NGO said: “The EU’s proposals need radical changes to prevent the hard-wiring of discrimination in criminal justice outcomes, protect the presumption of innocence and ensure meaningful accountability for AI in criminal justice. 

“The legislation lacks any safeguards against discrimination, while the wide-ranging exemption for ‘safeguarding public security’ completely undercuts what little safeguards there are in relation to criminal justice. The framework must include rigorous safeguards and restrictions to prevent discrimination and protect the right to a fair trial. This should include restricting the use of systems that attempt to profile people and predict the risk of criminality.” 

The Civil Liberties Union for Europe (Liberties) also hit out at loopholes that the NGO said would allow EU Member States to get around bans on problematic uses of AI.

“There are way too many problematic uses of the technology that are allowed, such as the use of algorithms to forecast crime or to have computers assess the emotional state of people at border control, both of which constitute serious human rights risks and pose a threat to the values of the EU,” warned Orsolya Reich, senior advocacy officer, in a statement. “We are also concerned that the police could use facial recognition technology in ways that endanger our fundamental rights and freedoms.”

Patrick Breyer, German Pirate MEP, warned that the proposal falls short of meeting the claimed bar of respect for ‘European values’. The MEP was one of 40 who signed a letter to the Commission last week warning that a leaked version of the proposal didn’t go far enough in protecting fundamental rights.

“We must seize the opportunity to let the European Union bring artificial intelligence in line with ethical requirements and democratic values. Unfortunately, the Commission’s proposal fails to protect us from the dangers [to] gender justice and equal treatment of all groups, such as through facial recognition systems or other kinds of mass surveillance,” said Breyer in a statement reacting to the formal proposal today.

“Biometric and mass surveillance, profiling and behavioural prediction technology in our public spaces undermines our freedom and threatens our open societies. The European Commission’s proposal would bring the high-risk use of automatic facial recognition in public spaces to the entire European Union, contrary to the will of the majority of our people. The proposed procedural requirements are a mere smokescreen. We cannot allow the discrimination of certain groups of people and the false incrimination of countless individuals by these technologies.”

European digital rights group, Edri, also highlighted what it dubbed a “worrying gap” in the proposal around “discriminatory and surveillance technologies”. “The regulation allows too wide a scope for self-regulation by companies profiting from AI. People, not companies, need to be the centre of this regulation,” said Sarah Chander, senior policy lead on AI at Edri, in a statement.

Access Now raised similar concerns in an initial reaction, saying the proposed prohibitions are “too limited”, and the legal framework “does nothing to stop the development or deployment of a host of applications of AI that drastically undermine social progress and fundamental rights”.

But the digital rights group welcomed transparency measures such as the publicly accessible database of high risk systems to be established — and the fact the regulation does include some prohibitions (albeit ones which it said don’t go far enough).

Consumer rights umbrella group, BEUC, was also swiftly critical — attacking the Commission’s proposal as weak on consumer protection because it focuses on regulating “a very limited range of AI uses and issues”.

“The European Commission should have put more focus on helping consumers trust AI in their daily lives,” said Monique Goyens, BEUC director general, in a statement: “People should be able to trust any product or service powered by artificial intelligence, be it ‘high-risk’, ‘medium-risk’ or ‘low-risk’. The EU must do more to ensure consumers have enforceable rights, as well as access to redress and remedies in case something goes wrong.”

New rules on machinery are also part of the legislative package, with adapted safety rules intended to take account of AI-fuelled changes (the Commission said it wants businesses that are integrating AI into machinery to only need to carry out a single conformity assessment to comply with the framework).

Tech industry group Dot Europe (formerly Edima) — whose members include Airbnb, Apple, Facebook, Google, Microsoft and other platform giants — welcomed the release of the Commission’s AI proposal but had yet to offer detailed remarks at the time of writing, saying it was formulating its position.

While startup advocacy group, Allied For Startups, told us it also needs time to study the detail of the proposal, Benedikt Blomeyer, its EU policy director, warned over the potential risk of burdening startups. “Our initial reaction is that if done wrong, this could significantly increase the regulatory burden placed on startups,” he said. “The key question for this proposal will be whether it is proportionate to the potential risks that AI poses whilst ensuring that European startups can also take advantage of its potential benefits”.

Other tech lobby groups didn’t wait to go on the attack at the prospect of bespoke red tape wrapping AI — claiming the regulation would “kneecap the EU’s nascent AI industry before it can learn to walk” as one Washington- and Brussels-based tech policy thinktank (the Center for Data Innovation) put it.

The CCIA trade association also quickly warned against “unnecessary red tape for developers and users”, adding that regulation alone won’t make the EU a leader in AI.

Today’s proposal kicks off plenty of debate under the EU’s co-legislative process, with the European Parliament and Member States via the EU Council needing to have their say on the draft — meaning a lot could change before EU institutions reach agreement on the final shape of a pan-EU AI regulation.

Commissioners today declined to give a timeframe for when the legislation might be adopted, saying only that they hoped the other EU institutions would engage immediately and that the process could be completed as soon as possible. It could, nonetheless, be several years before the regulation is ratified and comes into force.

This report was updated with reactions to the Commission proposal, and with additional detail about the proposed enforcement structure (Article 37).
