
Why most AI benchmarks tell us so little



On Tuesday, startup Anthropic released a family of generative AI models that it claims achieve best-in-class performance. Just a few days later, rival Inflection AI unveiled a model that it asserts comes close to matching some of the most capable models out there, including OpenAI’s GPT-4, in quality.

Anthropic and Inflection are by no means the first AI firms to claim that their models meet or beat the competition by some objective measure. Google argued the same of its Gemini models at their release, and OpenAI said it of GPT-4 and its predecessors, GPT-3, GPT-2 and GPT-1. The list goes on.

But what metrics are they talking about? When a vendor says a model achieves state-of-the-art performance or quality, what does that mean, exactly? Perhaps more to the point: Will a model that technically “performs” better than some other model actually feel improved in a tangible way?

On that last question, not likely.

The reason — or rather, the problem — lies with the benchmarks AI companies use to quantify a model’s strengths — and weaknesses.

Esoteric measures

The most commonly used benchmarks today for AI models — specifically chatbot-powering models like OpenAI’s ChatGPT and Anthropic’s Claude — do a poor job of capturing how the average person interacts with the models being tested. For example, one benchmark cited by Anthropic in its recent announcement, GPQA (“A Graduate-Level Google-Proof Q&A Benchmark”), contains hundreds of Ph.D.-level biology, physics and chemistry questions — yet most people use chatbots for tasks like responding to emails, writing cover letters and talking about their feelings.

Jesse Dodge, a scientist at the Allen Institute for AI, the AI research nonprofit, says that the industry has reached an “evaluation crisis.”

“Benchmarks are typically static and narrowly focused on evaluating a single capability, like a model’s factuality in a single domain, or its ability to solve mathematical reasoning multiple choice questions,” Dodge told TechCrunch in an interview. “Many benchmarks used for evaluation are three-plus years old, from when AI systems were mostly just used for research and didn’t have many real users. In addition, people use generative AI in many ways — they’re very creative.”

The wrong metrics

It’s not that the most-used benchmarks are totally useless. Someone’s undoubtedly asking ChatGPT Ph.D.-level math questions. However, as generative AI models are increasingly positioned as mass market, “do-it-all” systems, old benchmarks are becoming less applicable.

David Widder, a postdoctoral researcher at Cornell studying AI and ethics, notes that many of the skills common benchmarks test — from solving grade school-level math problems to identifying whether a sentence contains an anachronism — will never be relevant to the majority of users.

“Older AI systems were often built to solve a particular problem in a context (e.g. medical AI expert systems), making a deeply contextual understanding of what constitutes good performance in that particular context more possible,” Widder told TechCrunch. “As systems are increasingly seen as ‘general purpose,’ this is less possible, so we increasingly see a focus on testing models on a variety of benchmarks across different fields.”

Errors and other flaws

Misalignment with real-world use cases aside, there are questions as to whether some benchmarks even properly measure what they purport to measure.

An analysis of HellaSwag, a test designed to evaluate commonsense reasoning in models, found that more than a third of the test questions contained typos and “nonsensical” writing. Elsewhere, MMLU (short for “Massive Multitask Language Understanding”), a benchmark that’s been pointed to by vendors including Google, OpenAI and Anthropic as evidence their models can reason through logic problems, asks questions that can be solved through rote memorization.

Test questions from the HellaSwag benchmark.

“[Benchmarks like MMLU are] more about memorizing and associating two keywords together,” Widder said. “I can find [a relevant] article fairly quickly and answer the question, but that doesn’t mean I understand the causal mechanism, or could use an understanding of this causal mechanism to actually reason through and solve new and complex problems in unforeseen contexts. A model can’t either.”

Fixing what’s broken

So benchmarks are broken. But can they be fixed?

Dodge thinks so — with more human involvement.

“The right path forward, here, is a combination of evaluation benchmarks with human evaluation,” he said, “prompting a model with a real user query and then hiring a person to rate how good the response is.”
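The hybrid approach Dodge describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual pipeline: a static benchmark score is blended with normalized human ratings of a model’s responses to real user queries, with the `weight` parameter (an assumption of this sketch) controlling how much the human judgment counts.

```python
from statistics import mean

def hybrid_score(benchmark_accuracy, human_ratings, weight=0.5):
    """Blend a static benchmark accuracy (0-1) with averaged human
    ratings (each 1-5, normalized to 0-1). `weight` sets how much
    human judgment counts relative to the benchmark."""
    human = mean(human_ratings) / 5.0
    return (1 - weight) * benchmark_accuracy + weight * human

# Example: a model that aces a static benchmark (0.90 accuracy) but
# earns middling ratings from people judging answers to real queries.
ratings = [3, 4, 2, 3, 3]  # raters score each response from 1 to 5
print(round(hybrid_score(0.90, ratings), 3))  # prints 0.75
```

Even this toy version shows why the combination matters: the mediocre human ratings pull the headline number well below the benchmark-only figure.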

As for Widder, he’s less optimistic that benchmarks today — even with fixes for the more obvious errors, like typos — can be improved to the point where they’d be informative for the vast majority of generative AI model users. Instead, he thinks that tests of models should focus on the downstream impacts of these models and whether the impacts, good or bad, are perceived as desirable to those impacted.

“I’d ask which specific contextual goals we want AI models to be able to be used for and evaluate whether they’d be — or are — successful in such contexts,” he said. “And hopefully, too, that process involves evaluating whether we should be using AI in such contexts.”
