
Europe’s digital rules reboot could tame Facebook, whistleblower Frances Haugen tells EU Parliament


US Facebook whistleblower Frances Haugen pictured during a hearing of the Internal Market and Consumer Protection Committee of the European Parliament in Brussels
Image Credits: BENOIT DOPPAGNE/BELGA MAG/AFP / Getty Images under a license.

In her latest turn in front of a phalanx of lawmakers, Facebook whistleblower Frances Haugen gave a polished testimony to the European Parliament on Monday — following similar sessions in front of U.K. and U.S. legislators in recent weeks.

Her core message was the same dire warning she’s sounded on both sides of the Atlantic: Facebook prioritizes profit over safety, choosing to ignore the amplification of toxic content that’s harmful to individuals, societies and democracy. Regulatory oversight, she argued, is therefore essential to rein in such irresponsibly operated platform power and make it accountable, and lawmakers have no time to lose in imposing rules on social media.

The (to date) highest profile Facebook whistleblower got a very warm reception from the European Parliament, where MEPs were universally effusive in thanking her for her time, and for what they couched as her “bravery” in raising her concerns publicly, applauding Haugen before she spoke and again at the end of the nearly three-hour presentation plus Q&A session.

They questioned her on a range of issues — giving over the largest share of their attention to how incoming pan-EU digital regulations can best deliver effective transparency and accountability on slippery platform giants.

The Digital Services Act (DSA) is front of mind for MEPs as they are considering and voting on amendments to the Commission’s proposal that could seriously reshape the legislation.

Such as a push by some MEPs to add an outright ban on behavioral advertising to the legislation, steering platforms toward privacy-safe alternatives like contextual ads. Or another amendment that’s recently gained some backing, which pushes to exempt news media from platform content takedowns.

Turns out Haugen isn’t a fan of either of those potential amendments. But she spoke up in favor of the regulation as a whole.


The general thrust of the DSA is aimed at achieving a trusted and safe online environment. A number of MEPs speaking during today’s session spied a soapboxing opportunity to toot the EU’s horn for having a digital regulation not just on the table but advancing rapidly toward adoption, slap-bang in the midst of (yet) another Facebook publicity crisis, with the glare of the global spotlight on Haugen speaking to the European Parliament.

The Facebook whistleblower was happy to massage political egos, telling MEPs that she’s “grateful” the EU is taking platform regulation seriously — and suggesting there’s an opportunity for the bloc to set a “global gold standard” with the DSA.

Although she used a similar line in the U.K. parliament during another evidence session last month, where she talked up domestic online safety legislation in similarly glowing tones.

To MEPs, Haugen repeated her warning to U.K. lawmakers that Facebook is exceptionally adept at “dancing with data” — impressing on them that they too must not pass naive laws that simply require the tech giant to hand over data about what’s happening on its platform. Rather Facebook must be made to explain any data sets it hands over, down to the detail of the queries it uses to pull data and generate oversight audits.

Without such a step in legislation, Haugen warned that shiny new EU digital rules will arrive with a massive loophole baked in for Facebook to dance through by serving up selectively self-serving data — running whatever queries it needs to paint the picture to get the tick in the box.

For regulation to be effective on platforms as untrustworthy as Facebook, she suggested it must be multitiered, dynamic and take continuous input from a broader ecosystem of civil society organizations and external researchers — to stay on top of emergent harms and ensure the law is actually doing the job intended.

It should also take a broad view of oversight, she urged, providing platform data to a wider circle of external experts than just the “vetted academics” of the current DSA proposal in order to really deliver the sought-for accountability around AI-fueled impacts.

“Facebook has shown that they will lie with data,” she told the European Parliament. “I encourage you to put in the DSA; if Facebook gives you data they should have to show you how they got it … It’s really, really important that they should have to disclose the process, the queries, the notebooks they used to pull this data because you can’t trust anything they give you unless you can confirm that.”

Haugen didn’t just sound the alarm; she layered on the flattery, too — telling MEPs that she “strongly believe[s] that Europe has a critical role to play in regulating these platforms because you are a vibrant, linguistically diverse democracy.”

“If you get the DSA right for your linguistically and ethnically diverse, 450 million EU citizens you can create a game-changer for the world — you can force platforms to price in societal risk to their business operations so that the decisions about what products to build and how to build them is not purely based on profit maximization. You can establish systemic rules and standards that address risks while protecting free speech and you can show the world how transparency, oversight and enforcement should work.”

“There’s a deep, deep need to make sure that platforms must disclose what safety systems they have, what languages those safety systems are in and a performance per language — and that’s the kind of thing where you can put in the DSA,” she went on, fleshing out her case for comprehensive disclosure requirements. “You can say: You need to be honest with us on is this actually dangerous for a large fraction of Europeans?”

Such an approach would have benefits that scale beyond Europe, per Haugen — by forcing Facebook “toward language-neutral content-neutral solutions,” which she argued are needed to tackle harms across all the markets and languages where the platform operates.

The skew in how much of Facebook’s (limited) safety budget gets directed toward English-speaking markets — and/or to the handful of markets where it’s afraid of regulation — is one of the core issues amplified by her leaking of so many internal Facebook documents. And she suggested Europe could help tackle this lack of global equity around how powerful platforms operate (and what they choose to prioritize or de-prioritize) by enforcing context-specific transparency around Facebook’s AI models — requiring not just a general measure of performance but specifics per market; per language; per safety system; even per cohort of heavily targeted users.

Forcing Facebook to address safety as a systemic requirement would not only solve problems the platform causes in markets across Europe but it would “speak up for people who live in fragile places in the world that don’t have as much influence,” she argued, adding: “The places in the world that have the most linguistic diversity are often the most fragile places and they need Europe to step in — because you guys have influence and you can really help them.”


While many of Haugen’s talking points were familiar from her earlier testimony sessions and press interviews, during the Q&A a number of EU lawmakers sought to engage her on whether Facebook’s problem with toxic content amplification might be tackled by an outright ban on microtargeted/behavioral advertising — an active debate in the parliament — so that the adtech giant can no longer use people’s information against them to profit through data-driven manipulation.

On this, Haugen demurred — saying she supports people being able to choose ad targeting (or no ad targeting) themselves, rather than regulators deciding.

Instead of an outright ban she suggested that “specific things and ads … really need to be regulated” — pointing to ad rates as one area she would target for regulation. “Given the current system subsidizes hate — it’s 5x to 10x cheaper to run a political ad that’s hateful than a non-hateful ad — I think you need to have flat rates for ads,” she said on that. “But I also think there should be regulation on targeting ads to specific people.

“I don’t know if you’re aware of this but you can target specific ads to an audience of 100 people. And I’m pretty sure that is being misused because I did an analysis on who is hyperexposed to political ads and unsurprisingly the people who are most exposed are in Washington, D.C. and they are radically overexposed — we’re talking thousands of political ads a month. So I do think having mechanisms to target specific people without their knowledge … is unacceptable.”

Haugen also argued for a ban on Facebook being able to use third-party data sources to enrich the profiles it holds on people for ad targeting purposes.

“With regard to profiling and data retention I think you shouldn’t be allowed to take third-party data sources — something Facebook does, they work with credit card companies, other forms — and it makes their ads radically more profitable,” she said, adding: “I think you should have to consent to every time you hook up more data sources. Because I think people would feel really uncomfortable if they knew that Facebook had some of the data they do.”

But on behavioral ad targeting she studiously avoided supporting an outright ban.

It was an interesting wrinkle during the session, given there is momentum on the issue within the EU — including as a result of her own whistleblowing amplifying regional lawmakers’ concerns about Facebook — and Haugen could have helped stoke that (but opted not to).


“With regard to targeted ads, I’m a strong proponent that people should be allowed to make choices with regard to how they are targeted — and I encourage prohibiting dark patterns that force people into opting into those things,” she said during one response (but without going into detail on exactly how regulators could draft a law that’s effective against something as cynically multifaceted as “dark pattern design”).

“Platforms should have to be transparent about how they use that data,” was all she offered, before falling back on reiterating: “I’m a big proponent that they should also have to publish policies like do they give flat ad rates for all political ads because you shouldn’t be subsidizing hate in political ads.”

Her argument against banning behavioral ads seemed to boil down to regulators achieving fully comprehensive platform transparency: transparency able to provide an accurate picture of what Facebook (et al.) actually does with people’s data, so that users can then make a genuine choice over whether they want such targeting or not. In other words, it hinges on full-picture accountability.

Yet during another point in the session — after she had been asked whether children can really consent to data processing by platforms like Facebook — Haugen argued it’s doubtful that adults can (currently) understand what Facebook is doing with their data, let alone kids.

“With regard to can children understand what they’re trading away, I think almost certainly we as adults — we don’t know what we’ve traded away,” she told MEPs. “We don’t know what goes in the algorithms, we don’t know how we’re targeted so the idea that children can given informed consent — I don’t think we give informed consent and they have less capability.”

Given that, her faith that such comprehensive transparency is possible — and will paint a universally comprehensible picture of data-driven manipulation that allows all adults to make a truly informed decision to accept manipulative behavior ads (or not) — looks, well, rather tenuous.

If we follow Haugen’s logic, were the suggested cure of radical transparency to fail, whether through regulators improperly or inaccurately communicating everything that’s been found to users, or through a failure to ensure users are appropriately and universally educated about their risks and rights, then the risk is, surely, that data-driven exploitation will continue (just now with a free pass baked into legislation).

Her argument here felt like it lacked coherence. As if her opposition to banning behavioral ads — and, therefore, to tackling one core incentive that’s fuelling social media’s manipulative toxicity — was rather more ideological than logical. 

(Certainly it looks like quite the leap of faith in governments around the world being able to scramble into place the kind of high functioning, “full-fat” oversight Haugen suggests is needed, even as, simultaneously, she’s spent weeks impressing on lawmakers that platforms can only be understood as highly context-specific and devilishly data-detailed algorithm machines. Not to mention the sheer scale of the task at hand, even just given Facebook’s “amazing” amounts of data, as she put it in the Q&A today, suggesting that if regulators were handed Facebook data in raw form it would be far too overwhelming for them.)

This is also perhaps exactly the perspective you’d expect from a data scientist, not a rights expert.

(Ditto her quick dismissal of banning behavioral ads is the sort of trigger reaction you’d expect from a platform insider whose expertise comes from having been privy to the black boxes and focused on manipulating algorithms and data versus being outside the machine where the harms flow and are felt.)

At another point during the session Haugen further complicated her advocacy for radical transparency as the sole panacea for social media’s ills — warning against the EU leaving enforcement of such complex matters up to 27 national agencies.

Were the EU to do that she suggested it would doom the DSA to fail. Instead she advised lawmakers to create a central EU bureaucracy to deal with enforcing the highly detailed, layered and dynamic rules she says are needed to wrap Facebook-level platforms — going so far as to suggest that ex-industry algorithm experts like herself might find a “home” there, chipping in to help with their specialist knowledge and “giv[ing] back by contributing to public accountability.”

“The number of formal experts in these things — how the algorithms really work and the consequences of them — there are very, very few in the world. Because you can’t get a master’s degree in it, you can’t get a Ph.D. in it, you have to go work for one of these companies and be trained up internally,” she suggested, adding: “I sincerely worry that if you delegate this functionality to 27 member states you will not be able to get critical mass in any one place.

“It’ll be very, very difficult to get enough experts and distribute them that broadly.”

With so many warnings to lawmakers about the need to nail down devilish details in self-serving data sets and “fragile” AIs, in order to prevent platforms from simply carrying on pulling the wool over everyone’s eyes, it seems instructive that Haugen should be so opposed to regulators actually choosing to set some simple limits — such as no personal data for ads.

She was also asked directly by MEPs whether regulators should put limits on what platforms can do with data and/or limits on the inputs they can use for algorithms. Again, her preference in response was for transparency, not limits. (Although elsewhere, as noted above, she did at least call for a ban on Facebook buying third-party data sets to enrich its ad profiling.)

Ultimately, then, the ideology of the algorithm expert may have a few blind spots when it comes to thinking outside the black box for ways to come up with effective regulation for data-driven software machines.

Some hard stops might actually be just what’s needed for democratic societies to wrest back control from data-mining tech giants.

Haugen’s best advocacy may therefore be her highly detailed warnings around the risk of loopholes fatally scuttling digital regulation. She is undoubtedly correct that the risks here are multitudinous.

Earlier in her presentation she raised another possible loophole — pushing lawmakers not to exempt news media content from the DSA (which is another potential amendment MEPs are mulling). “If you’re going to make content neutral rules, then they must really be neutral,” she argued. “Nothing is singled out and nothing is exempted.

“Every modern disinformation campaign will exploit news media channels on digital platforms by gaming the system,” she warned. “If the DSA makes it illegal for platforms to address these issues we risk undermining the effectiveness of the law — indeed we may be worse off than today’s situation.”

During the Q&A, Haugen also faced a couple of questions from MEPs on new challenges that will arise for regulators in light of Facebook’s planned pivot to building the so-called “metaverse.”

On this she told lawmakers she’s “extremely concerned” — warning of the increased data gathering that could flow from the proliferation of metaverse-feeding sensors in homes and offices.

She also raised concerns that Facebook’s focus on building workplace tools might result in a situation in which opting out is not even an option, given that employees typically have little say over business tools — suggesting people may face a dystopic future choice between Facebook’s ad profiling or being able to earn a living.

Facebook’s fresh focus on “the metaverse” illustrates what Haugen dubbed a “meta problem” for Facebook — aka: That its preference is “to move on” rather than stop and fix the problems created by its current technology.

Regulators must throw the levers that force the juggernaut to plot a new, safety-focused course, she urged.

