
EU lawmakers back transparency and safety rules for generative AI



In a series of votes in the European Parliament this morning, MEPs backed a raft of amendments to the bloc’s draft AI legislation — including agreeing on a set of requirements for so-called foundational models, which underpin generative AI technologies like OpenAI’s ChatGPT.

The text of the amendments agreed by MEPs in two committees put obligations on providers of foundational models to apply safety checks, data governance measures and risk mitigations prior to putting their models on the market — including obligating them to consider “foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law”.

The amendments also commit foundational model makers to reduce the energy consumption and resource use of their systems and to register them in an EU database set to be established by the AI Act. Providers of generative AI technologies (such as ChatGPT), meanwhile, are obliged to comply with transparency obligations in the regulation (ensuring users are informed that content was machine generated); apply “adequate safeguards” in relation to content their systems generate; and provide a summary of any copyrighted materials used to train their AIs.

In recent weeks MEPs have been focused on ensuring general purpose AI will not escape regulatory requirements, as we reported earlier.

Other key areas of debate for parliamentarians included biometric surveillance — where MEPs also agreed to changes aimed at beefing up protections for fundamental rights.


The lawmakers are working towards agreeing the parliament’s negotiating mandate for the AI Act to unlock the next stage of the EU’s co-legislative process.

MEPs in two committees, the Internal Market Committee and the Civil Liberties Committee, voted on some 3,000 amendments today — adopting a draft mandate on the planned artificial intelligence rulebook with 84 votes in favour, 7 against and 12 abstentions.

“In their amendments to the Commission’s proposal, MEPs aim to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly. They also want to have a uniform definition for AI designed to be technology-neutral, so that it can apply to the AI systems of today and tomorrow,” the parliament said in a press release.

Among the key amendments agreed by the committees today are an expansion of the list of prohibited practices — adding bans on “intrusive” and “discriminatory” uses of AI systems such as:

  • “Real-time” remote biometric identification systems in publicly accessible spaces;
  • “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
  • Predictive policing systems (based on profiling, location or past criminal behaviour);
  • Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

The latter, which would outright ban the business model of the controversial US AI company Clearview AI, comes a day after France’s data protection watchdog hit the startup with another fine for failing to comply with existing EU laws. There’s no doubt enforcement of such prohibitions against foreign entities that opt to flout the bloc’s rules will remain a challenge. But the first step is to have hard law.

Commenting in a statement after the vote, co-rapporteur and MEP Dragos Tudorache said:

Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy and safe. We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement.

A plenary vote in parliament to seal the mandate is expected next month (during the 12-15 June session), after which trilogue talks will kick off with the Council toward agreeing a final compromise on the file.

Back in 2021, when the Commission presented its draft proposal for the AI Act, it suggested the risk-based framework would create a blueprint for “human” and “trustworthy” AI. However, concerns were quickly raised that the plan fell far short of the mark — including in areas related to biometric surveillance, with the Commission proposing only a limited ban on the use of highly intrusive technology like facial recognition in public.

Civil society groups and EU bodies pressed for amendments to bolster protections for fundamental rights — with the European Data Protection Supervisor and European Data Protection Board among those calling for the legislation to go further and urging EU lawmakers to put a total ban on biometrics surveillance in public.

MEPs appear to have largely heeded civil society’s call, although concerns do remain. (And of course it remains to be seen how the proposal MEPs have strengthened could get watered back down as Member State governments enter the negotiations in the coming months.)

Other changes parliamentarians agreed in today’s committee votes include expansions to the regulation’s (fixed) classification of “high-risk” areas — to include harm to people’s health, safety, fundamental rights and the environment.

AI systems used to influence voters in political campaigns and those used in recommender systems by larger social media platforms (with more than 45 million users, aligning with the VLOPs classification in the Digital Services Act), were also put on the high-risk list.

At the same time, though, MEPs backed changes to what counts as high risk — proposing to leave it up to AI developers to decide whether their system is significant enough to meet the bar at which obligations apply, something digital rights groups warn (see below) is “a major red flag” for enforcing the rules.

Elsewhere, MEPs backed amendments aimed at boosting citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that “significantly” impact their rights.

The lack of meaningful redress for individuals affected by harmful AIs was a major loophole flagged by civil society groups in their fall 2021 call for revisions, which pointed out the glaring difference between the Commission’s AI Act proposal and the bloc’s General Data Protection Regulation, under which individuals can complain to regulators and pursue other forms of redress.

Another change MEPs agreed on today is a reformed role for a body called the EU AI Office, which they want to monitor how the rulebook is implemented — supplementing decentralized oversight of the regulation at the Member State level.

Meanwhile, in a nod to the perennial industry cry that too much regulation is harmful for “innovation”, MEPs also added exemptions from the rules for research activities and AI components provided under open-source licenses — while noting the law promotes regulatory sandboxes, or controlled environments established by public authorities, for testing AI before its deployment.

Digital rights group EDRi, which has been urging major revisions to the Commission draft, said everything it had been pushing for was passed by MEPs “in some form or another” — flagging particularly the (now) full ban on facial recognition in public; along with (new) bans on predictive policing, emotion recognition and on other harmful uses of AI.

Another key win it points to is the inclusion of accountability and transparency obligations on deployers of high-risk AI — imposing on them a duty to carry out a fundamental rights impact assessment, and providing mechanisms by which people affected can challenge AI systems.

“The Parliament is sending a clear message to governments and AI developers with its list of bans, heeding civil society’s demands that some uses of AI are just too harmful to be allowed,” Sarah Chander, EDRi senior policy advisor, told TechCrunch.

“This new text is a vast improvement on the Commission’s original proposal when it comes to reining in the abuse of sensitive data about our faces, bodies, and identities,” added Ella Jakubowska, an EDRi senior policy advisor who has focused on biometrics.

However, EDRi said there are still areas of concern — pointing to the use of AI for migration control as a big one.

On this, Chander noted that MEPs failed to include in the list of prohibited practices the use of AI to facilitate “illegal pushbacks”, or to profile people in a discriminatory manner — something EDRi had called for. “Unfortunately, the [European Parliament’s] support for peoples’ rights stops short of protecting migrants from AI harms, including where AI is used to facilitate pushbacks,” she said, suggesting: “Without these prohibitions the European Parliament is opening the door for a panopticon at the EU border.”

The group said it would also like to see improvements to the proposed ban on predictive policing — to cover location-based predictive policing, which Chander described as “essentially a form of automated racial profiling”. It is also worried that the proposed remote biometric identification ban won’t cover the full extent of the mass surveillance practices it has seen being used across Europe.

“Whilst the Parliament’s approach is very comprehensive [on biometrics], there are a few practices that we would like to see even further restricted. Whilst there is a ban on retrospective public facial recognition, it contains an exception for law enforcement use which we still consider to be too risky. In particular, it could incentivise mass retention of CCTV footage and biometric data, which we would clearly oppose,” added Jakubowska, who said the group also wants to see the EU outlaw emotion recognition no matter the context — “as this ‘technology’ is fundamentally flawed, unscientific, and discriminatory by design”.

Another concern EDRi flags is MEPs’ proposal to let AI developers judge whether their systems are high risk or not — as this risks undermining enforceability.

“Unfortunately, the Parliament is proposing some very worrying changes relating to what counts as ‘high-risk’ AI. With the changes in the text, developers will be able to decide if their system is ‘significant’ enough to be considered high risk, a major red flag for the enforcement of this legislation,” Chander suggested.

While today’s committee vote is a big step towards setting the parliament’s mandate — and setting the tone for the upcoming trilogue talks with the Council — much could still change, and there is likely to be some pushback from Member State governments, which tend to be more focused on national security considerations than on protecting fundamental rights.

Asked whether it’s expecting the Council to try to unpick some of the expanded protections against biometric surveillance, Jakubowska said: “We can see from the Council’s general approach last year that they want to water down the already insufficient protections in the Commission’s original text. Despite having no credible evidence of effectiveness — and lots of evidence of the harms — we see that many member state governments are keen to retain the ability to conduct biometric mass surveillance.

“They often do this under the pretence of ‘national security’ such as in the case of the French Olympics and Paralympics, and/or as part of broader trends criminalising migration and other minoritised communities. That being said, we saw what could be considered ‘dissenting opinions’ from both Austria and Germany, who both favour stronger protections of biometric data in the AI Act. And we’ve heard rumours that several other countries are willing to make compromises in the direction of the biometrics provisions. This gives us hope that there will be a positive outcome from the trilogues, even though we of course expect a strong push back from several Member States.”

Giving another early assessment from civil society, Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties (ICCL), which also joined the 2021 call for major revisions to the AI Act, cautioned over enforcement challenges. While the parliament has strengthened enforceability through amendments that explicitly allow regulators to perform remote inspections, he warned, MEPs are simultaneously tying regulators’ hands by denying them access to the source code of AI systems for investigations.

“We are also concerned that we will see a repeat of GDPR-like enforcement problems,” he told TechCrunch.

On the plus side he said MEPs have taken a step towards addressing “the shortcomings” of the Commission’s definition of AI systems — notably with generative AI systems being brought in scope and the application of transparency obligations on them, which he dubbed “a key step towards addressing their harms”.

But — on the issue of copyright and AI training data — Shrishak was critical of the lack of a “firm stand” by MEPs to stop data mining giants from ingesting information for free, including copyright-protected data.

The copyright amendment only requires companies to provide a summary of copyright-protected data used for training — which, he suggested, will leave it up to rights holders to sue.

Asked about possible concerns that exemptions for research activities and AI components provided under open source licenses might create fresh loopholes for AI giants to escape the rules, he agreed that’s a worry.

“Research is a loophole that is carried over from the scope of the regulation. This is likely to be exploited by companies,” he suggested. “In the context of AI it is a big loophole considering large parts of the research is taking place in companies. We already see Google saying they are ‘experimenting’ with Bard. Further to this, I expect some companies to claim that they develop AI components and not AI systems. (I already heard this from one large corporation during discussions on general purpose AI. This was one of their arguments for why GPAI [general purpose AI] should not be regulated.)”

However the Free Software Foundation Europe argues that a provision which limits the open source exemption to “micro-enterprises” means it will be difficult for tech giants to appropriate as a loophole in practice.

Alexander Sander, a senior policy consultant for the Foundation, told TechCrunch: “It is highly unlikely that Big Tech is outsourcing everything to Micro-Enterprises without deploying it afterwards. Once they deploy they fall under the regulation again (if bigger than a Micro-Enterprise).”

“In fact we safeguard developers with this decision and shift responsibilities to those who deploy and significantly benefit on the market,” he also suggested.

He added that the organization is generally quite happy with the MEPs’ proposal — while critiquing some “super complicated wording” and the fact that the stipulation has only been included in a recital (i.e. rather than in an article).

Clearer wording around activities “between micro-enterprises” would also be welcomed by the group, he said, as it wants this to also cover activities between non-profit and micro-enterprises.

This report was updated with additional response from the Free Software Foundation Europe.

