UK’s approach to AI safety lacks credibility, report warns

In recent weeks, the U.K. government has been trying to cultivate an image of itself as an international mover and shaker in the nascent field of AI safety — dropping a flashy announcement of an upcoming summit on the topic last month, along with a pledge to spend £100 million on a foundational model task force that will do “cutting-edge” AI safety research, as the government tells it.

Yet the self-same government, led by U.K. prime minister and Silicon Valley superfan Rishi Sunak, has eschewed the need to pass new domestic legislation to regulate applications of AI — a stance its own policy paper on the topic brands “pro-innovation.”

It is also in the midst of passing a deregulatory reform of the national data protection framework that risks working against AI safety.

The latter is one of several conclusions by the independent research-focused Ada Lovelace Institute, a part of the Nuffield Foundation charitable trust, in a new report examining the U.K.’s approach to regulating AI that makes for diplomatic-sounding but, at times, pretty awkward reading for ministers.

The report packs a full 18 recommendations for leveling up government policy/credibility in this area — that is, if the U.K. wants to be taken seriously on the topic.

The Institute advocates for an “expansive” definition of AI safety — “reflecting the wide variety of harms that are arising as AI systems become more capable and embedded in society.” So the report is concerned with how to regulate harms that “AI systems can cause today.” Call them real-world AI harms. (Not with sci-fi-inspired theoretical possible future risks, which have been puffed up by certain high-profile figures in the tech industry of late, seemingly in a bid to attention-hack policymakers.)

For now, it’s fair to say Sunak’s government’s approach to regulating (real-world) AI safety has been contradictory — heavy on flashy, industry-led PR claiming it wants to champion safety but light on policy proposals for setting substantive rules to guard against the smorgasbord of risks and harms we know can flow from ill-judged applications of automation.

Here’s the Ada Lovelace Institute dropping the primary truth bomb:

The UK Government has laid out its ambition to make the UK an “AI superpower,” leveraging the development and proliferation of AI technologies to benefit the UK’s society and economy, and hosting a global summit in autumn 2023. This ambition will only materialise with effective domestic regulation, which will provide the platform for the UK’s future AI economy.

The report’s laundry list of recommendations goes on to make it clear the Institute sees a lot of room for improvement on the U.K.’s current approach to AI. 

Earlier this year, the government published its preferred approach to regulating AI domestically — saying it didn’t see the need for new legislation or oversight bodies at this stage. Instead, the white paper offered a set of flexible principles the government suggested existing, sector-specific (and/or cross-cutting) regulators should “interpret and apply to AI within their remits” — but without any new legal powers or extra funding to oversee novel uses of AI.

The five principles set out in the white paper are safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. All of these sound fine on paper — but paper alone clearly isn’t going to cut it when it comes to regulating AI safety.

The U.K.’s plan to let existing regulators figure out what to do about AI with just some broad-brush principles to aim for and no new resource contrasts with that of the EU where lawmakers are busy hammering out an agreement on a risk-based framework that the bloc’s executive proposed back in 2021.

The U.K.’s shoestring budget approach of saddling existing, overworked regulators with new responsibilities for eyeing AI developments on their patch without any powers to enforce outcomes on bad actors doesn’t look very credible on AI safety, to put it mildly.

It doesn’t even seem a coherent strategy if you’re shooting for being pro-innovation, either — since it will demand AI developers consider a whole patchwork of sector-specific and cross-cutting legislation, drafted long before the latest AI boom. Developers may also find themselves subject to oversight by a number of different regulatory bodies (however weak sauce their attention might be, given the lack of resource and legal firepower to enforce the aforementioned principles). So, really, it looks like a recipe for uncertainty over which existing rules may apply to AI apps. (And, most probably, a patchwork of regulatory interpretations, depending on the sector, use case and oversight bodies involved, etc. Ergo, confusion and cost, not clarity.)

Even if existing U.K. regulators do quickly produce guidance on how they will approach AI — as some already have or are working to do — there will still be plenty of gaps, as the Ada Lovelace Institute’s report also points out, since coverage gaps are a feature of the U.K.’s existing regulatory landscape. So the proposal to simply stretch this approach further implies regulatory inconsistency getting baked in — and even amplified — as usage of AI scales across all sectors.

Here’s the Institute again:

Large swathes of the UK economy are currently unregulated or only partially regulated. It is unclear who would be responsible for implementing AI principles in these contexts, which include:

- sensitive practices such as recruitment and employment, which are not comprehensively monitored by regulators, even within regulated sectors;
- public-sector services such as education and policing, which are monitored and enforced by an uneven network of regulators;
- activities carried out by central government departments, which are often not directly regulated, such as benefits administration or tax fraud detection;
- unregulated parts of the private sector, such as retail.

“AI is being deployed and used in every sector but the UK’s diffuse legal and regulatory network for AI currently has significant gaps. Clearer rights and new institutions are needed to ensure that safeguards extend across the economy,” it also suggests.

Another growing contradiction for the government’s claimed “AI leadership” position is that its bid for the country to become a global AI safety hub is being directly undermined by in-train efforts to water down domestic protections for people’s data — such as by lowering protections when they’re subject to automated decisions with a significant and/or legal impact — via the deregulatory Data Protection and Digital Information Bill (No. 2).

While the government has so far avoided the most head-banging Brexiteer suggestions for ripping up the EU-derived data protection rulebook — such as simply deleting the entirety of Article 22 (which deals with protection for automated decisions) from the U.K.’s General Data Protection Regulation — it is nonetheless forging ahead with a plan to reduce the level of protection citizens enjoy under current data protection law in various ways, despite its new ambition to make the U.K. a global AI safety hub.

“The UK GDPR — the legal framework for data protection currently in force in the UK — provides protections that are vital to protecting individuals and communities from potential AI harms. The Data Protection and Digital Information Bill (No. 2), tabled in its current form in March 2023, significantly amends these protections,” warns the Institute — pointing, for example, to the Bill’s removal of a prohibition on many types of automated decisions, replaced by a requirement that data controllers have “safeguards in place, such as measures to enable an individual to contest the decision” — which it argues amounts to a lower level of protection in practice.

“The reliance of the Government’s proposed framework on existing legislation and regulators makes it even more important that underlying regulation like data protection governs AI appropriately,” it goes on. “Legal advice commissioned by the Ada Lovelace Institute . . . suggests that existing automated processing safeguards may not in practice provide sufficient protection to people interacting with everyday services, like applying for a loan.”

“Taken collectively, the Bill’s changes risk further undermining the Government’s regulatory proposals for AI,” the report adds.

The Institute’s first recommendation is thus for government to rethink elements of the data protection reform bill that are “likely to undermine the safe development, deployment and use of AI, such as changes to the accountability framework.” It also recommends the government widen its review to look at existing rights and protections in U.K. law — with a view to plugging any other legislative gaps and introducing new rights and protections for people affected by AI-informed decisions where necessary.

Other recommendations in the report include introducing a statutory duty for regulators to have regard to the aforementioned principles, including “strict transparency and accountability obligations,” and providing them with more funding/resources to tackle AI-related harms; exploring the introduction of a common set of powers for regulators, including an ex ante, developer-focused regulatory capability; and that the government should look at whether an AI ombudsperson should be established to support people adversely affected by AI.

The Institute also recommends the government clarify the law around AI and liability — which is another area where the EU is already streaks ahead.

On foundational model safety — an area that’s garnered particular interest and attention from the U.K. government of late, thanks to the viral buzz around generative AI tools like OpenAI’s ChatGPT — the Institute also believes the government needs to go further, recommending U.K.-based developers of foundational models should be given mandatory reporting requirements to make it easier for regulators to stay on top of a very fast-moving tech.

It even suggests that leading foundational model developers, such as OpenAI, Google DeepMind and Anthropic, should be required to provide government with notification when they (or any subprocessors they’re working with) begin large-scale training runs of new models.

“This would provide Government with an early warning of advancements in AI capabilities, allowing policymakers and regulators to prepare for the impact of these developments, rather than being caught unaware,” it suggests, adding that reporting requirements should also include information such as access to the data used to train models; results from in-house audits; and supply chain data.

Another suggestion is for the government to invest in small pilot projects to bolster its own understanding of trends in AI R&D.

Commenting on the report findings in a statement, Michael Birtwistle, associate director at the Ada Lovelace Institute, said:

The Government rightfully recognises that the UK has a unique opportunity to be a world-leader in AI regulation and the prime minister should be commended for his global leadership on this issue. However, the UK’s credibility on AI regulation rests on the Government’s ability to deliver a world-leading regulatory regime at home. Efforts towards international coordination are very welcome but they are not sufficient. The Government must strengthen its domestic proposals for regulation if it wants to be taken seriously on AI and achieve its global ambitions.
