ChatGPT is violating Europe’s privacy laws, Italian DPA tells OpenAI

Image Credits: Didem Mente/Anadolu Agency / Getty Images

OpenAI has been told it’s suspected of violating European Union privacy law, following a multi-month investigation of its AI chatbot, ChatGPT, by Italy’s data protection authority.

Details of the Italian authority’s draft findings haven’t been disclosed. But the Garante said today OpenAI has been notified of the objections and given 30 days to respond with a defence against the allegations.

Confirmed breaches of the pan-EU regime can attract fines of up to €20 million, or up to 4% of global annual turnover, whichever is higher. More uncomfortably for an AI giant like OpenAI, data protection authorities (DPAs) can issue orders that require changes to how data is processed in order to bring confirmed violations to an end. So it could be forced to change how it operates. Or pull its service out of EU Member States where privacy authorities seek to impose changes it doesn’t like.
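To make that cap concrete, here’s a minimal sketch (the function name is our own, purely illustrative) of how the GDPR’s “higher of” fine ceiling works out in practice:

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR administrative fine for the most serious
    breaches: the higher of EUR 20 million or 4% of a company's total
    worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# For a company with EUR 2 billion in annual turnover, the ceiling is
# EUR 80 million -- well above the EUR 20 million floor.
print(gdpr_max_fine(2_000_000_000))
```

In other words, for any business with more than €500 million in annual turnover, the 4% figure is the binding limit, not the €20 million one.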

OpenAI was contacted for a response to the Garante’s notification of violation. We’ll update this report if they send a statement.

Update: OpenAI said:

We believe our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy. We want our AI to learn about the world, not about private individuals. We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people. We plan to continue to work constructively with the Garante.

AI model training lawfulness in the frame

The Italian authority raised concerns about OpenAI’s compliance with the bloc’s General Data Protection Regulation (GDPR) last year — when it ordered a temporary ban on ChatGPT’s local data processing which led to the AI chatbot being temporarily suspended in the market.

The Garante’s March 30 provision to OpenAI, aka a “register of measures”, highlighted both the lack of a suitable legal basis for the collection and processing of personal data for the purpose of training the algorithms underlying ChatGPT, and the tendency of the AI tool to ‘hallucinate’ (i.e. its potential to produce inaccurate information about individuals) among its concerns at that point. It also flagged child safety as a problem.

In all, the authority said it suspected ChatGPT of breaching Articles 5, 6, 8, 13 and 25 of the GDPR.

Despite identifying this laundry list of suspected violations, OpenAI was able to resume service of ChatGPT in Italy relatively quickly last year, after taking steps to address some issues raised by the DPA. However the Italian authority said it would continue to investigate the suspected violations. It’s now arrived at preliminary conclusions the tool is breaking EU law.

While the Italian authority hasn’t yet said which of the previously suspected ChatGPT breaches it’s confirmed at this stage, the legal basis OpenAI claims for processing personal data to train its AI models looks like a particular crux issue.

This is because ChatGPT was developed using masses of data scraped off the public Internet — information which includes the personal data of individuals. And the problem OpenAI faces in the European Union is that processing EU people’s data requires it to have a valid legal basis.

The GDPR lists six possible legal bases — most of which are simply not relevant in this context. Last April, OpenAI was told by the Garante to remove references to “performance of a contract” for ChatGPT model training — leaving it with just two possibilities: consent or legitimate interests.

Given the AI giant has never sought to obtain the consent of the countless millions (or even billions) of web users whose information it has ingested and processed for AI model building, any attempt to claim it had Europeans’ permission for the processing would seem doomed to fail. And when OpenAI revised its documentation after the Garante’s intervention last year, it appeared to be relying on a claim of legitimate interests. However, this legal basis still requires a data controller to allow data subjects to raise an objection — and have processing of their info stop.

How OpenAI could do this in the context of its AI chatbot is an open question. (It might, in theory, require it to withdraw and destroy illegally trained models and retrain new models without the objecting individual’s data in the training pool — but, assuming it could even identify all the unlawfully processed data on a per individual basis, it would need to do that for the data of each and every objecting EU person who told it to stop… Which, er, sounds expensive.)

Beyond that thorny issue, there is the wider question of whether the Garante will finally conclude legitimate interests is even a valid legal basis in this context.

Frankly, that looks unlikely. Legitimate interests is not a free-for-all. It requires data controllers to balance their own interests against the rights and freedoms of the individuals whose data is being processed, and to consider things like whether individuals would have expected this use of their data and the potential for it to cause them unjustified harm. (If they would not have expected it, and there are risks of such harm, legitimate interests will not be found to be a valid legal basis.)

The processing must also be necessary, with no other, less intrusive way for the controller to achieve its end.

Notably, the EU’s top court has previously found legitimate interests to be an inappropriate basis for Meta to carry out tracking and profiling of individuals to run its behavioral advertising business on its social networks. So there is a big question mark over the notion of another type of AI giant seeking to justify processing people’s data at vast scale to build a commercial generative AI business — especially when the tools in question generate all sorts of novel risks for named individuals (from disinformation and defamation to identity theft and fraud, to name a few).

A spokesperson for the Garante confirmed that the legal basis for processing people’s data for model training remains among the suspected violations. But they did not confirm exactly which article (or articles) it suspects OpenAI of breaching at this point.

The authority’s announcement today is not yet the final word, either — it will wait to receive OpenAI’s response before taking a final decision.

Here’s the Garante’s statement (which we’ve translated from Italian using AI):

[Italian Data Protection Authority] has notified OpenAI, the company that runs the ChatGPT artificial intelligence platform, of its notice of objection for violating data protection regulations.

Following the provisional restriction of processing order, adopted by the Garante against the company on March 30, and at the outcome of the preliminary investigation carried out, the Authority considered that the elements acquired may constitute one or more unlawful acts with respect to the provisions of the EU Regulation.

OpenAI will have 30 days to communicate its defence briefs on the alleged violations.

In defining the proceedings, the Garante will take into account the ongoing work of the special task force set up by the European Data Protection Board (EDPB), which brings together the EU’s data protection authorities.

OpenAI is also facing scrutiny over ChatGPT’s GDPR compliance in Poland, following a complaint last summer which focuses on an instance of the tool producing inaccurate information about a person and OpenAI’s response to that complainant. That separate GDPR probe remains ongoing.

OpenAI, meanwhile, has responded to rising regulatory risk across the EU by seeking to establish a physical base in Ireland; and announcing, in January, that this Irish entity would be the service provider for EU users’ data going forward.

Its hope with these moves is to gain so-called “main establishment” status in Ireland and switch to having assessment of its GDPR compliance led by Ireland’s Data Protection Commission, via the regulation’s one-stop-shop mechanism — rather than (as now) having its business potentially subject to DPA oversight from anywhere in the Union where its tools have local users.

However OpenAI has yet to obtain this status so ChatGPT could still face other probes by DPAs elsewhere in the EU. And, even if it gets the status, the Italian probe and enforcement will continue as the data processing in question predates the change to its processing structure.

The bloc’s data protection authorities have sought to coordinate on their oversight of ChatGPT by setting up a taskforce to consider how the GDPR applies to the chatbot, via the European Data Protection Board, as the Garante’s statement notes. That (ongoing) effort may, ultimately, produce more harmonized outcomes across discrete ChatGPT GDPR investigations — such as those in Italy and Poland.

However authorities remain independent and competent to issue decisions in their own markets. So, equally, there are no guarantees any of the current ChatGPT probes will arrive at the same conclusions.

