EU publishes election security guidance for social media giants and others in scope of DSA


The European Union published draft election security guidelines on Tuesday aimed at the roughly two dozen larger platforms (those with more than 45 million regional monthly active users) that are regulated under the Digital Services Act (DSA) and, consequently, have a legal duty to mitigate systemic risks such as political deepfakes while safeguarding fundamental rights like freedom of expression and privacy.

In-scope platforms include the likes of Facebook, Google Search, Instagram, LinkedIn, TikTok, YouTube and X.

The Commission has named elections as one of a handful of priority areas for its enforcement of the DSA on very large online platforms (VLOPs) and very large online search engines (VLOSEs). This subset of DSA-regulated companies is required to identify and mitigate systemic risks, such as information manipulation targeting democratic processes in the region, in addition to complying with the full online governance regime.

Per the EU’s election security guidance, the bloc expects regulated tech giants to up their game on protecting democratic votes and deploy capable content moderation resources in the multiple official languages spoken across the bloc — ensuring they have enough staff on hand to respond effectively to risks arising from the flow of information on their platforms and act on reports by third-party fact-checkers — with the risk of big fines for dropping the ball.

This will require platforms to pull off a precision balancing act on political content moderation: sharpening their ability to distinguish between, for example, political satire, which should remain online as protected free speech, and malicious political disinformation, whose creators could be hoping to influence voters and skew elections.

In the latter case, the content falls under the DSA categorization of systemic risk that platforms are expected to swiftly spot and mitigate. The EU standard here requires that they put in place “reasonable, proportionate, and effective” mitigation measures for risks related to electoral processes, as well as respecting other relevant provisions of the wide-ranging content moderation and governance regulation.

The Commission has been working on the election guidelines at pace, launching a consultation on a draft version just last month. The sense of urgency in Brussels flows from upcoming European Parliament elections in June. Officials have said they will stress-test platforms’ preparedness next month. So the EU doesn’t appear ready to leave platforms’ compliance to chance, even with a hard law in place that means tech giants are risking big fines if they fail to meet Commission expectations this time around.

User controls for algorithmic feeds

Key among the EU’s election guidance aimed at mainstream social media firms and other major platforms is the recommendation that they give users a meaningful choice over algorithmic and AI-powered recommender systems, so people are able to exert some control over the kind of content they see.

“Recommender systems can play a significant role in shaping the information landscape and public opinion,” the guidance notes. “To mitigate the risk that such systems may pose in relation to electoral processes, [platform] providers … should consider: (i.) Ensuring that recommender systems are designed and adjusted in a way that gives users meaningful choices and controls over their feeds, with due regard to media diversity and pluralism.”

Platforms’ recommender systems should also have measures to downrank disinformation targeted at elections, based on what the guidance couches as “clear and transparent methods,” such as deceptive content that’s been fact-checked as false and/or posts coming from accounts repeatedly found to spread disinformation.

Platforms must also deploy mitigations to avoid the risk of their recommender systems spreading generative AI-based disinformation (aka political deepfakes). They should also be proactively assessing their recommender engines for risks related to electoral processes and rolling out updates to shrink risks. The EU also recommends transparency around the design and functioning of AI-driven feeds and urges platforms to engage in adversarial testing, red-teaming, etc., to amp up their ability to spot and quash risks.

On GenAI, the EU’s advice also urges watermarking of synthetic media, while noting the limits of technical feasibility here.

Recommended mitigation measures and best practices for larger platforms in the 25 pages of draft guidance published today also lay out an expectation that platforms will dial up internal resourcing to focus on specific election threats, such as upcoming election events, and put in place processes for sharing relevant information and risk analysis.

Resourcing should have local expertise

The guidance emphasizes the need for analysis of “local context-specific risks,” in addition to member state-specific, national and regional information gathering to feed the work of entities responsible for the design and calibration of risk mitigation measures. It also calls for “adequate content moderation resources,” with local language capacity and knowledge of the national and/or regional contexts and specificities, a long-running gripe of the EU when it comes to platforms’ efforts to shrink disinformation risks.

Another recommendation is for them to reinforce internal processes and resources around each election event by setting up “a dedicated, clearly identifiable internal team” ahead of the electoral period — with resourcing proportionate to the risks identified for the election in question.

The EU guidance also explicitly recommends hiring staffers with local expertise, including language knowledge. Platforms have often sought to repurpose a centralized resource — without always seeking out dedicated local expertise.

“The team should cover all relevant expertise including in areas such as content moderation, fact-checking, threat disruption, hybrid threats, cybersecurity, disinformation and FIMI [foreign information manipulation and interference], fundamental rights and public participation and cooperate with relevant external experts, for example with the European Digital Media Observatory (EDMO) hubs and independent factchecking organisations,” the EU also writes.

The guidance allows for platforms to potentially ramp up resourcing around particular election events and de-mobilize teams once a vote is over.

It notes that the periods when extra risk mitigation measures may be needed are likely to vary, depending on the level of risks and any specific EU member state rules around elections (which can vary). But the Commission recommends that platforms have mitigations deployed and up and running at least one to six months before an electoral period, and continue at least one month after the elections.

Unsurprisingly, the greatest intensity for mitigations is expected in the period prior to the date of elections, to address risks like disinformation targeting voting procedures.

Hate speech in the frame

The EU is generally advising platforms to draw on other existing guidelines, including the Code of Practice on Disinformation and Code of Conduct on Countering Hate Speech, to identify best practices for mitigation measures. But it stipulates they must ensure users are provided with access to official information on electoral processes, such as banners, links and pop-ups designed to steer users to authoritative info sources for elections.

“When mitigating systemic risks for electoral integrity, the Commission recommends that due regard is also given to the impact of measures to tackle illegal content such as public incitement to violence and hatred to the extent that such illegal content may inhibit or silence voices in the democratic debate, in particular those representing vulnerable groups or minorities,” the Commission writes.

“For example, forms of racism, or gendered disinformation and gender-based violence online including in the context of violent extremist or terrorist ideology or FIMI targeting the LGBTIQ+ community can undermine open, democratic dialogue and debate, and further increase social division and polarization. In this respect, the Code of conduct on countering illegal hate speech online can be used as inspiration when considering appropriate action.”

It also recommends they run media literacy campaigns and deploy measures aimed at providing users with more contextual information, such as fact-checking labels; prompts and nudges; clear indications of official accounts; clear and non-deceptive labeling of accounts run by member states, third countries and entities controlled or financed by third countries; tools and info to help users assess the trustworthiness of information sources; tools to assess provenance; and processes to counter misuse of any of these procedures and tools. That reads like a list of the stuff Elon Musk has dismantled since taking over Twitter (now X).

Notably, Musk has also been accused of letting hate speech flourish on the platform on his watch. And at the time of writing, X remains under investigation by the EU for a range of suspected DSA breaches, including in relation to content moderation requirements.

Transparency to amp up accountability

On political advertising, the guidance points platforms to incoming transparency rules in this area — advising they prepare for the legally binding regulation by taking steps to align themselves with the requirements now. (For example, by clearly labeling political ads, providing information on the sponsor behind these paid political messages, maintaining a public repository of political ads, and having systems in place to verify the identity of political advertisers.)

Elsewhere, the guidance also sets out how to deal with election risks related to influencers.

Platforms should also have systems in place enabling them to demonetize disinformation, per the guidance, and are urged to provide “stable and reliable” data access to third parties undertaking scrutiny and research of election risks. Data access for studying election risks should also be provided for free, the advice stipulates.

More generally the guidance encourages platforms to cooperate with oversight bodies, civil society experts and each other when it comes to sharing information about election security risks — urging them to establish comms channels for tips and risk reporting during elections.

For handling high-risk incidents, the advice recommends platforms establish an internal incident response mechanism that involves senior leadership and maps other relevant stakeholders within the organization to drive accountability around their election event responses and avoid the risk of buck passing.

Post-election, the EU suggests platforms conduct and publish a review of how they fared, factoring in third-party assessments (i.e., rather than just seeking to mark their own homework, as they have historically preferred, trying to put a PR gloss atop ongoing platform manipulation risks).

The election security guidelines aren’t mandatory, as such, but if platforms opt for an approach other than what’s being recommended for tackling threats in this area, they must be able to demonstrate that their alternative meets the bloc’s standard, per the Commission.

If they fail to do that, they’re risking being found in breach of the DSA, which allows for penalties of up to 6% of global annual turnover for confirmed violations. So there’s an incentive for platforms to get with the bloc’s program on ramping up resources to address political disinformation and other info risks to elections as a way to shrink their regulatory risk. But they will still need to execute on the advice.

Further specific recommendations for the upcoming European Parliament elections, which will run June 6–9, are also set out in the EU guidance.

On a technical note, the election security guidelines remain in draft at this stage. But the Commission said formal adoption is expected in April once all language versions of the guidance are available.
