
Google’s new Gemini model can analyze an hour-long video — but few people can use it


illustration featuring Google's Bard logo
Image Credits: TechCrunch

Last October, a research paper published by a Google data scientist, Databricks CTO Matei Zaharia and UC Berkeley professor Pieter Abbeel posited a way to allow GenAI models — i.e. models along the lines of OpenAI’s GPT-4 and ChatGPT — to ingest far more data than was previously possible. In the study, the co-authors demonstrated that, by removing a major memory bottleneck for AI models, they could enable models to process millions of words, as opposed to the hundreds of thousands that the most capable models of the time maxed out at.

AI research moves fast, it seems.

Today, Google announced the release of Gemini 1.5 Pro, the newest member of its Gemini family of GenAI models. Designed to be a drop-in replacement for Gemini 1.0 Pro (which formerly went by “Gemini Pro 1.0” for reasons known only to Google’s labyrinthine marketing arm), Gemini 1.5 Pro is improved in a number of areas compared with its predecessor, perhaps most significantly in the amount of data that it can process.

Gemini 1.5 Pro can take in ~700,000 words, or ~30,000 lines of code — 35x the amount Gemini 1.0 Pro can handle. And because the model is multimodal, it’s not limited to text: Gemini 1.5 Pro can ingest up to 11 hours of audio or an hour of video in a variety of languages.

Google Gemini 1.5 Pro
Image Credits: Google

To be clear, that’s an upper bound.

The version of Gemini 1.5 Pro available to most developers and customers starting today (in a limited preview) can only process ~100,000 words at once. Google’s characterizing the large-data-input Gemini 1.5 Pro as “experimental,” allowing only developers approved as part of a private preview to pilot it via the company’s GenAI dev tool AI Studio. Several customers using Google’s Vertex AI platform also have access to the large-data-input Gemini 1.5 Pro — but not all.

Still, VP of research at Google DeepMind Oriol Vinyals heralded it as an achievement.

“When you interact with [GenAI] models, the information you’re inputting and outputting becomes the context, and the longer and more complex your questions and interactions are, the longer the context the model needs to be able to deal with gets,” Vinyals said during a press briefing. “We’ve unlocked long context in a pretty massive way.”

Big context

A model’s context, or context window, refers to input data (e.g. text) that the model considers before generating output (e.g. additional text). A simple question — “Who won the 2020 U.S. presidential election?” — can serve as context, as can a movie script, email or e-book.

Models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic — often in problematic ways. This isn’t necessarily so with models with large contexts. As an added upside, large-context models can better grasp the narrative flow of data they take in and generate more contextually rich responses — hypothetically, at least.
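That “forgetting” is mechanical: once a conversation outgrows the window, the oldest turns are simply truncated away before the model ever sees the prompt. A minimal sketch of the idea (using a toy one-word-per-token count, not any vendor’s actual truncation logic):

```python
# Illustrative sketch of why small-context models "forget": once the
# conversation exceeds the window, the oldest turns get truncated away.
# Token counting here is a toy approximation (1 word = 1 token).

def truncate_to_window(turns, window_tokens):
    """Keep only the most recent turns that fit in the context window."""
    kept = []
    used = 0
    for turn in reversed(turns):
        cost = len(turn.split())
        if used + cost > window_tokens:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))

conversation = [
    "user: My name is Ada and I love astronomy.",
    "model: Nice to meet you, Ada!",
    "user: Recommend a telescope for a beginner.",
    "model: A small Dobsonian reflector is a great start.",
    "user: What was my name again?",
]

# With a tiny 20-token window, the turn containing the name no longer fits,
# so the model never sees it.
print(truncate_to_window(conversation, window_tokens=20))
```

A bigger window simply means fewer turns fall off the back of the list, which is the whole (if unglamorous) trick behind "remembering" long conversations.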

There have been other attempts at — and experiments on — models with atypically large context windows.

AI startup Magic claimed last summer to have developed a large language model (LLM) with a 5 million-token context window. Two papers in the past year detail model architectures ostensibly capable of scaling to a million tokens — and beyond. (“Tokens” are subdivided bits of raw data, like the syllables “fan,” “tas” and “tic” in the word “fantastic.”) And recently, a group of scientists hailing from Meta, MIT and Carnegie Mellon developed a technique that they say removes the constraint on model context window size altogether.
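To make “token” concrete, here is a toy greedy subword tokenizer. The vocabulary is invented purely for illustration; real models learn theirs with algorithms like BPE or SentencePiece:

```python
# Toy greedy longest-match subword tokenizer. The vocabulary below is
# made up for illustration; production models use learned vocabularies.

VOCAB = {"fan", "tas", "tic", "a", "c", "f", "i", "n", "s", "t"}

def tokenize(word, vocab=VOCAB):
    """Greedily split a word into the longest known vocabulary pieces."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest remaining substring first.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            # Unknown character: fall back to emitting it as its own token.
            tokens.append(word[i])
            i += 1
    return tokens

print(tokenize("fantastic"))  # → ['fan', 'tas', 'tic']
```

Because a token is usually smaller than a word, a 1 million-token window works out to roughly 700,000 English words, which matches the figures Google quotes.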

But Google is the first to make a model with a context window of this size commercially available, beating the previous leader Anthropic’s 200,000-token context window — if a private preview counts as commercially available.

Google Gemini 1.5 Pro
Image Credits: Google

Gemini 1.5 Pro’s maximum context window is 1 million tokens, and the version of the model more widely available has a 128,000-token context window, the same as OpenAI’s GPT-4 Turbo.

So what can one accomplish with a 1 million-token context window? Lots of things, Google promises — like analyzing a whole code library, “reasoning across” lengthy documents like contracts, holding long conversations with a chatbot and analyzing and comparing content in videos.

During the briefing, Google showed two prerecorded demos of Gemini 1.5 Pro with the 1 million-token context window enabled.

In the first, the demonstrator asked Gemini 1.5 Pro to search the transcript of the Apollo 11 moon landing telecast — which comes to around 402 pages — for quotes containing jokes, and then to find a scene in the telecast that looked similar to a pencil sketch. In the second, the demonstrator told the model to search for scenes in “Sherlock Jr.,” the Buster Keaton film, going by descriptions and another sketch.

Google Gemini 1.5 Pro
Image Credits: Google

Gemini 1.5 Pro successfully completed all the tasks asked of it, but not particularly quickly. Each took between ~20 seconds and a minute to process — far longer than, say, the average ChatGPT query.

Google Gemini 1.5 Pro
Image Credits: Google

Vinyals says that the latency will improve as the model’s optimized. Already, the company’s testing a version of Gemini 1.5 Pro with a 10 million-token context window.

“The latency aspect [is something] we’re … working to optimize — this is still in an experimental stage, in a research stage,” he said. “So these issues I would say are present like with any other model.”

Me, I’m not so sure latency that poor will be attractive to many folks — much less paying customers. Having to wait minutes at a time to search across a video doesn’t sound pleasant — or very scalable in the near term. And I’m concerned about how the latency manifests in other applications, like chatbot conversations and analyzing codebases. Vinyals didn’t say — which doesn’t instill much confidence.

My more optimistic colleague Frederic Lardinois pointed out that the overall time savings might just make the thumb twiddling worth it. But I think it’ll depend very much on the use case. For picking out a show’s plot points? Perhaps not. But for finding the right screengrab from a movie scene you only hazily recall? Maybe.

Other improvements

Beyond the expanded context window, Gemini 1.5 Pro brings other, quality-of-life upgrades to the table.

Google’s claiming that — in terms of quality — Gemini 1.5 Pro is “comparable” to the current version of Gemini Ultra, Google’s flagship GenAI model, thanks to a new Mixture-of-Experts (MoE) architecture composed of smaller, specialized “expert” models. Gemini 1.5 Pro essentially breaks down tasks into multiple subtasks and then delegates them to the appropriate expert models, deciding which expert to route each subtask to based on its own predictions.

MoE isn’t novel — it’s been around in some form for years. But its efficiency and flexibility have made it an increasingly popular choice among model vendors (see: the model powering Microsoft’s language translation services).
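Google hasn’t published Gemini 1.5 Pro’s internals, but the general MoE pattern its description implies can be sketched in a few lines. Everything below (the router weights, the toy experts) is hypothetical; a real MoE layer routes between learned neural sub-networks:

```python
# Toy Mixture-of-Experts forward pass. The router and experts here are
# hypothetical stand-ins; a real MoE layer uses learned networks.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, router_weights, top_k=2):
    """Route input x to the top_k highest-scoring experts, mix their outputs."""
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in router_weights]
    gates = softmax(scores)
    # Only the top_k experts actually run; that's the efficiency win:
    # compute scales with k, not with the total number of experts.
    top = sorted(range(len(experts)), key=lambda i: gates[i], reverse=True)[:top_k]
    norm = sum(gates[i] for i in top)
    out = [0.0] * len(x)
    for i in top:
        y = experts[i](x)  # run just this selected expert
        for d in range(len(x)):
            out[d] += (gates[i] / norm) * y[d]
    return out

# Two toy "experts": one doubles its input, one negates it.
experts = [lambda v: [2 * e for e in v], lambda v: [-e for e in v]]
router = [[1.0, 0.0], [0.0, 1.0]]  # made-up router weights

print(moe_forward([1.0, 0.5], experts, router, top_k=1))  # → [2.0, 1.0]
```

The appeal is that total parameter count can grow with the number of experts while per-query compute stays roughly flat, since only a few experts fire for any given input.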

Now, “comparable quality” is a bit of a nebulous descriptor. Quality where it concerns GenAI models, especially multimodal ones, is hard to quantify — doubly so when the models are gated behind private previews that exclude the press. For what it’s worth, Google claims that Gemini 1.5 Pro performs at a “broadly similar level” compared to Ultra on the benchmarks the company uses to develop LLMs while outperforming Gemini 1.0 Pro on 87% of those benchmarks. (I’ll note that outperforming Gemini 1.0 Pro is a low bar.)

Pricing is a big question mark.

During the private preview, Gemini 1.5 Pro with the 1 million-token context window will be free to use, Google says. But the company plans to introduce pricing tiers in the near future that start at the standard 128,000-token context window and scale up to 1 million tokens.

I have to imagine the larger context window won’t come cheap — and Google didn’t allay fears by opting not to reveal pricing during the briefing. If pricing’s in line with Anthropic’s, it could cost $8 per million prompt tokens and $24 per million generated tokens. But perhaps it’ll be lower; stranger things have happened! We’ll have to wait and see.
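If pricing did land near those Anthropic-style rates (to be clear, Google has announced no numbers, so the figures below are pure speculation), filling the full window would add up fast. A back-of-the-envelope sketch under those assumed rates:

```python
# Back-of-the-envelope API cost under HYPOTHETICAL per-token pricing.
# The $8 / $24 per-million rates mirror Anthropic's published rates at
# the time; Google has announced no Gemini 1.5 Pro pricing.

def query_cost(prompt_tokens, output_tokens,
               prompt_rate=8.00, output_rate=24.00):
    """Cost in dollars, with rates quoted per million tokens."""
    return (prompt_tokens * prompt_rate
            + output_tokens * output_rate) / 1_000_000

# One fully loaded 1M-token prompt plus a 1,000-token answer:
print(f"${query_cost(1_000_000, 1_000):.2f}")  # prints $8.02
```

Eight-ish dollars per maxed-out query is the kind of number that makes “analyze a whole code library” a deliberate decision rather than a casual one.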

I wonder, too, about the implications for the rest of the models in the Gemini family, chiefly Gemini Ultra. Can we expect Ultra model upgrades roughly aligned with Pro upgrades? Or will there always be — as there is now — an awkward period where the available Pro models are superior performance-wise to the Ultra models, which Google’s still marketing as the top of the line in its Gemini portfolio?

Chalk it up to teething issues if you’re feeling charitable. If you’re not, call it like it is: darn confusing.
