
The emerging types of language models and why they matter



AI systems that understand and generate text, known as language models, are the hot new thing in the enterprise. A recent survey found that 60% of tech leaders said their budgets for AI language technologies increased by at least 10% in 2020, while 33% reported a 30% increase.

But not all language models are created equal. Several types are emerging as dominant, including large, general-purpose models like OpenAI’s GPT-3 and models fine-tuned for particular tasks (think answering IT desk questions). At the edge exists a third category of model — one that tends to be highly compressed in size and limited to a few capabilities, designed specifically to run on Internet of Things devices and workstations.

These approaches differ markedly in their strengths, shortcomings and requirements — here’s how they compare and where you can expect to see them deployed over the next year or two.

Large language models

Large language models are, generally speaking, tens of gigabytes in size and trained on enormous amounts of text data, sometimes at the petabyte scale. They’re also among the biggest models in terms of parameter count, where a “parameter” refers to a value the model can change independently as it learns. Parameters are the parts of the model learned from historical training data and essentially define the skill of the model on a problem, such as generating text.
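Parameter counts are easy to inspect in practice. As a minimal sketch (illustrative only, using the open source Hugging Face transformers library and the small GPT-2 checkpoint rather than any of the models discussed below):

```python
# Count the parameters of a pretrained model (illustrative sketch).
from transformers import AutoModel

model = AutoModel.from_pretrained("gpt2")  # the smallest GPT-2 checkpoint
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params:,} parameters")  # roughly 124 million for this checkpoint
```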

“Large models are used for zero-shot scenarios or few-shot scenarios where little domain-[tailored] training data is available and usually work okay generating something based on a few prompts,” Fangzheng Xu, a Ph.D. student at Carnegie Mellon specializing in natural language processing, told TechCrunch via email. In machine learning, “few-shot” refers to the practice of training a model with minimal data, while “zero-shot” implies that a model can learn to recognize things it hasn’t explicitly seen during training.

“A single large model could potentially enable many downstream tasks with little training data,” Xu continued.
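In practice, few-shot usage means packing a handful of labeled examples directly into the prompt rather than retraining the model. The sketch below illustrates the idea with the openai Python client as it existed around GPT-3's release; the model name, examples and API key are placeholders, not taken from anyone quoted here.

```python
# Few-shot prompting: the "training data" is just a few labeled examples
# embedded in the prompt sent to a large language model.
import openai  # assumes the legacy (circa-2022) openai client

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = """Classify the sentiment of each review.

Review: The battery lasts all day. Sentiment: positive
Review: The screen cracked within a week. Sentiment: negative
Review: Setup took five minutes and everything just worked. Sentiment:"""

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3-family model available at the time
    prompt=prompt,
    max_tokens=1,
    temperature=0,
)
print(response["choices"][0]["text"].strip())  # expected: "positive"
```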

The usage of large language models has grown dramatically over the past several years as researchers develop newer — and bigger — architectures. In June 2020, AI startup OpenAI released GPT-3, a 175 billion-parameter model that can generate text and even code given a short prompt containing instructions. Open research group EleutherAI subsequently made available GPT-J, a smaller (6 billion parameters) but nonetheless capable language model that can translate between languages, write blog posts, complete code and more. More recently, Microsoft and Nvidia open-sourced a model dubbed Megatron-Turing Natural Language Generation (MT-NLG), which, at 530 billion parameters, is among the largest models developed to date for reading comprehension and natural language inference.

“One reason these large language models remain so remarkable is that a single model can be used for tasks” including question answering, document summarization, text generation, sentence completion, translation and more, Bernard Koch, a computational social scientist at UCLA, told TechCrunch via email. “A second reason is because their performance continues to scale as you add more parameters to the model and add more data … The third reason that very large pre-trained language models are remarkable is that they appear to be able to make decent predictions when given just a handful of labeled examples.”

Startups including Cohere and AI21 Labs also offer models akin to GPT-3 through APIs. Other companies, particularly tech giants like Google, have chosen to keep the large language models they’ve developed in house and under wraps. For example, Google recently detailed — but declined to release — a 540 billion-parameter model called PaLM that the company claims achieves state-of-the-art performance across language tasks.

Large language models, open source or no, all have steep development costs in common. A 2020 study from AI21 Labs pegged the expenses for developing a text-generating model with only 1.5 billion parameters at as much as $1.6 million. Inference — actually running the trained model — is another drain. One source estimates the cost of running GPT-3 on a single AWS instance (p3dn.24xlarge) at a minimum of $87,000 per year.
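To put that inference figure in context, a back-of-the-envelope calculation (ours, not from the cited estimate) shows how quickly an always-on GPU instance adds up:

```python
# Rough annual cost of one always-on cloud GPU instance.
# The $87,000-per-year figure cited above implies an effective rate of
# roughly $10 per hour; that rate is an assumption for illustration.
hourly_rate_usd = 10.0
hours_per_year = 24 * 365
annual_cost = hourly_rate_usd * hours_per_year
print(f"${annual_cost:,.0f} per year")  # about $87,600
```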

“Large models will get larger, more powerful, versatile, more multimodal and cheaper to train. Only Big Tech and extremely well-funded startups can play this game,” Vu Ha, a technical director at the AI2 Incubator, told TechCrunch via email. “Large models are great for prototyping, building novel proof-of-concepts and assessing technical feasibility. They are rarely the right choice for real-world deployment due to cost. An application that processes tweets, Slack messages, emails and such on a regular basis would become cost prohibitive if using GPT-3.”

Large language models will continue to be the standard for cloud services and APIs, where versatility and enterprise access matter more than latency. But despite recent architectural innovations, these types of language models will remain impractical for the majority of organizations, whether in academia, the public sector or the private sector.

Fine-tuned language models

Fine-tuned models are generally smaller than their large language model counterparts. Examples include OpenAI’s Codex, a direct descendant of GPT-3 fine-tuned for programming tasks. While still containing billions of parameters, Codex is both smaller than GPT-3 and better at generating — and completing — strings of computer code.

Fine-tuning can improve a model’s ability to perform a task, for example answering questions or generating protein sequences (as in the case of Salesforce’s ProGen). But it can also bolster a model’s understanding of certain subject matter, like clinical research.

“Fine-tuned … models are good for mature tasks with lots of training data,” Xu said. “Examples include machine translation, question answering, named entity recognition, entity linking [and] information retrieval.”

The advantages don’t stop there. Because fine-tuned models are derived from existing language models, they don’t take nearly as much time — or compute — to train or run. (Larger models like those mentioned above can take weeks to train, or require far more computational power to finish in days.) They also don’t require as much data as large language models: GPT-3 was trained on 45 terabytes of text, versus the 159 gigabytes on which Codex was trained.
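For a sense of what fine-tuning looks like in code, here is a minimal sketch using the open source Hugging Face transformers and datasets libraries; the base model, dataset and hyperparameters are arbitrary choices for illustration, not those used by any company mentioned here.

```python
# Fine-tune a small pretrained model on a modest labeled dataset (sketch).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # small pretrained base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A few thousand labeled examples, far less data than pretraining required.
dataset = load_dataset("imdb", split="train[:2000]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="finetuned-model",
                         per_device_train_batch_size=16,
                         num_train_epochs=1)

Trainer(model=model, args=args, train_dataset=dataset).train()
```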

Fine-tuning has been applied to many domains, but one especially strong, recent example is OpenAI’s InstructGPT. Using a technique called “reinforcement learning from human feedback,” OpenAI collected a data set of human-written demonstrations on prompts submitted to the OpenAI API, along with prompts written by a team of human data labelers. It leveraged these data sets to create fine-tuned offshoots of GPT-3 that, in addition to being a hundredth its size, are demonstrably less likely to generate problematic text while closely aligning with a user’s intent.
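One core piece of that recipe is a reward model trained on human preference rankings. The snippet below is a simplified sketch of the pairwise loss commonly used for that step (standard practice in the literature, not OpenAI’s actual code): the reward model learns to score the human-preferred completion higher than the rejected one.

```python
# Simplified pairwise ranking loss for a reward model (PyTorch sketch).
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Minimized when the preferred completion gets the higher reward score.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy batch of two comparisons with made-up reward scores.
loss = preference_loss(torch.tensor([1.2, 0.3]), torch.tensor([0.1, 0.5]))
print(loss.item())
```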

In another demonstration of the power of fine-tuning, Google researchers in February published a study claiming that a model far smaller than GPT-3 — fine-tuned language net (FLAN) — bests GPT-3 “by a large margin” on a number of challenging benchmarks. FLAN, which has 137 billion parameters, outperformed GPT-3 on 19 out of the 25 tasks the researchers tested it on and even surpassed GPT-3’s performance on 10 tasks.

“I think fine-tuning is probably the most widely used approach in industry right now, and I don’t see that changing in the short term. For now, fine-tuning on smaller language models allows users more control to solve their specialized problems using their own domain-specific data,” Koch said. “Instead of distributing [very large language] models that users can fine tune on their own, companies are commercializing few-shot learning through API prompts where you can give the model short prompts and examples.”

Edge language models

Edge models, which are purposefully small in size, can take the form of fine-tuned models — but not always. Sometimes, they’re trained from scratch on small data sets to meet specific hardware constraints (e.g., phone or local web server hardware). In any case, edge models — while limited in some respects — offer a host of benefits that large language models can’t match.

Cost is a major one. With an edge model that runs offline and on-device, there aren’t any cloud usage fees to pay. (Even fine-tuned models are often too large to run on local machines; MT-NLG can take over a minute to generate text on a desktop processor.) Tasks like analyzing millions of tweets may rack up thousands of dollars in fees on popular cloud-based models.

Edge models also offer greater privacy than their internet-bound counterparts, in theory, because they don’t need to transmit or analyze data in the cloud. They’re also faster — a key advantage for applications like translation. Apps such as Google Translate rely on edge models to deliver offline translations.

“Edge computing is likely to be deployed in settings where immediate feedback is needed … In general, I would think these are scenarios where humans are interacting conversationally with AI or robots or something like self-driving cars reading road signs,” Koch said. “As a hypothetical example, Nvidia has a demo where an edge chatbot has a conversation with clients at a fast food restaurant. A final use case might be automated note taking in electronic medical records. Processing conversation quickly in these situations is essential.”

Of course, small models can’t accomplish everything that large models can. They’re bound by the hardware found in edge devices, which ranges from single-core processors to GPU-equipped systems-on-chips. Moreover, some research suggests that the techniques used to develop them can amplify unwanted characteristics, like algorithmic bias.

“[There’s usually a] trade off between power usage and predictive power. Also, mobile device computation is not really increasing at the same pace as distributed high-performance computing clusters, so the performance may lag behind more and more,” Xu said.

Looking to the future

As large, fine-tuned and edge language models continue to evolve with new research, they’re likely to encounter roadblocks on the path to wider adoption. For example, while fine-tuning models requires less data compared to training a model from scratch, fine-tuning still requires a dataset. Depending on the domain — e.g., translating from a little-spoken language — the data might not exist.

“The disadvantage of fine-tuning is that it still requires a fair amount of data. The disadvantage of few-shot learning is that it doesn’t work as well as fine-tuning, and that data scientists and machine learning engineers have less control over the model because they are only interacting with it through an API,” Koch continued. “And the disadvantages of edge AI are that complex models cannot fit on small devices, so performance is strictly worse than models that can fit on a single desktop GPU — much less cloud-based large language models distributed across tens of thousands of GPUs.”

Xu notes that all language models, regardless of size, remain understudied in certain important aspects. She hopes that areas like explainability and interpretability — which aim to understand how and why a model works and expose this information to users — receive greater attention and investment in the future, particularly in high-stakes domains like medicine.

“Provenance is really an important next step that these models should have,” Xu said. “In the future, there will be more and more efficient fine-tuning techniques … to accommodate the increasing cost of fine-tuning a larger model in whole. Edge models will continue to be important, as the larger the model, the more research and development is needed to distill or compress the model to fit on edge devices.”
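Compression of the kind Xu describes is already standard tooling. As one hedged example (not tied to any product mentioned in this piece), PyTorch’s dynamic quantization converts a model’s linear layers to 8-bit integers so it takes less disk space and runs faster on CPU-only edge hardware:

```python
# Shrink a small transformer with dynamic int8 quantization (sketch).
import os
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def saved_size_mb(m, path):
    torch.save(m.state_dict(), path)
    return os.path.getsize(path) / 1e6

print(f"original:  {saved_size_mb(model, 'fp32.pt'):.0f} MB")
print(f"quantized: {saved_size_mb(quantized, 'int8.pt'):.0f} MB")
```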
