
Why Apple is taking a small-model approach to generative AI

Apple Intelligence is more bespoke than larger models, with a focus on user experience


Apple Software Engineering SVP Craig Federighi, seen presenting Apple Intelligence at WWDC 2024
Image Credits: Apple

Among the biggest questions surrounding models like ChatGPT, Gemini and Midjourney since launch is what role (if any) they’ll play in our daily lives. It’s something Apple is striving to answer with its own take on the category, Apple Intelligence, which was officially unveiled this week at WWDC 2024.

The company led with flash at Monday’s presentation; that’s just how keynotes work. When SVP Craig Federighi wasn’t skydiving or performing parkour with the aid of some Hollywood (well, Cupertino) magic, Apple was determined to demonstrate that its in-house models were every bit as capable as the competition’s.

The jury is still out on that question, with the betas having only dropped Monday, but the company has since revealed some of what makes its approach to generative AI different. First and foremost is scope. Many of the most prominent companies in the space take a “bigger is better” approach to their models. The goal of these systems is to serve as a kind of one-stop shop for the world’s information.

Apple’s approach to the category, on the other hand, is grounded in something more pragmatic. Apple Intelligence is a more bespoke take on generative AI, built specifically around the company’s operating systems. It’s a very Apple approach in the sense that it prioritizes a frictionless user experience above all.

Apple Intelligence is a branding exercise in one sense, but in another, the company wants the generative AI features to blend seamlessly into the operating system. It’s completely fine, even preferred, if the user has no concept of the underlying technologies that power these systems. That’s how Apple products have always worked.

Keeping the models small

The key to much of this is creating smaller models: training the systems on a customized dataset designed specifically for the kinds of functionality required by users of its operating systems. It’s not immediately clear how much the size of these models will affect the black box issue, but Apple thinks that, at the very least, having more topic-specific models will increase the transparency around why the system makes specific decisions.

Due to the relatively limited nature of these models, Apple doesn’t expect that there will be a huge amount of variety when prompting the system to, say, summarize text. Ultimately, however, the variation from prompt to prompt depends on the length of the text being summarized. The operating systems also feature a feedback mechanism through which users can report issues with the generative AI system.

While Apple Intelligence is much more focused than larger models, it can cover a spectrum of requests, thanks to the inclusion of “adapters,” which are specialized for different tasks and styles. Broadly, however, Apple’s is not a “bigger is better” approach to creating models, as things like size, speed and compute power need to be taken into account — particularly when dealing with on-device models.
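
Apple’s foundation models overview describes these adapters as small modules layered on top of a shared base model and swapped in per task. Apple hasn’t published implementation code, so the following is only a rough conceptual sketch of that general low-rank adapter idea; the task names and dimensions are illustrative, not Apple’s actual system:

    # Conceptual sketch only; Apple has not published implementation code.
    # One frozen base weight is shared across features, with a small
    # low-rank adapter (A, B) added on top for each task.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, rank = 512, 16  # illustrative dimensions

    base_weight = rng.standard_normal((d_model, d_model))  # frozen, shared

    # Hypothetical task-specific adapters
    adapters = {
        "summarize": (rng.standard_normal((d_model, rank)),
                      rng.standard_normal((rank, d_model))),
        "proofread": (rng.standard_normal((d_model, rank)),
                      rng.standard_normal((rank, d_model))),
    }

    def apply_layer(x, task):
        """Apply the shared base weight plus the selected task adapter."""
        a, b = adapters[task]
        return x @ (base_weight + a @ b)  # effective weight = W + A @ B

    x = rng.standard_normal((1, d_model))
    print(apply_layer(x, "summarize").shape)  # -> (1, 512)

The appeal of such a design is that each new capability adds only an adapter’s worth of parameters rather than a whole new model, which matters for the on-device constraints around size, speed and compute that the company describes.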

ChatGPT, Gemini and the rest

Opening up to third-party models like OpenAI’s ChatGPT makes sense when considering the limited focus of Apple’s models. The company trained its systems specifically for the macOS/iOS experience, so there’s going to be plenty of information that is out of its scope. In cases where the system thinks a third-party application would be better suited to provide a response, a system prompt will ask whether you want to share that information externally. If you don’t receive a prompt like this, the request is being processed with Apple’s in-house models.

This should function the same with all external models Apple partners with, including Google Gemini. It’s one of the rare instances where the system will draw attention to its use of generative AI in this way. The decision was made, in part, to squash any privacy concerns. Every company has different standards when it comes to collecting and training on user data.

Requiring users to opt in each time removes some of the onus from Apple, even if it does add some friction to the process. You can also opt out of using third-party platforms systemwide, though doing so would limit the amount of data the operating system and Siri can access. You cannot, however, opt out of Apple Intelligence in one fell swoop. Instead, you will have to do so on a feature-by-feature basis.

Private Cloud Compute

Whether the system processes a specific query on device or via a remote server with Private Cloud Compute, on the other hand, will not be made clear. Apple’s philosophy is that such disclosures aren’t necessary, since it holds its servers to the same privacy standards as its devices, down to the first-party silicon they run on.

One way to know for certain whether the query is being managed on- or off-device is to disconnect your machine from the internet. If the problem requires cloud computing to solve, but the machine can’t find a network, it will throw up an error noting that it cannot complete the requested action.

Apple isn’t breaking down the specifics of which actions will require cloud-based processing. There are several factors at play, and the ever-changing nature of these systems means something that requires cloud compute today might be handled on-device tomorrow. On-device computing won’t always be the faster option, either, as speed is one of the parameters Apple Intelligence weighs when deciding where to process a prompt.

There are, however, certain operations that will always be performed on-device. The most notable of the bunch is Image Playground, as the full diffusion model is stored locally. Apple tweaked the model so it generates images in three different house styles: animation, illustration and sketch. The animation style looks a good bit like the house style of another Steve Jobs-founded company. Similarly, text generation is currently available in a trio of styles: friendly, professional and concise.

Even at this early beta stage, Image Playground’s generation is impressively quick, often only taking a couple of seconds. As for the question of inclusion when generating images of people, the system requires you to input specifics, rather than simply guessing at things like ethnicity.

How Apple will handle datasets

Apple’s models are trained on a combination of licensed datasets and publicly accessible information crawled by Applebot. The company’s web crawler has been around for some time now, providing contextual data to applications like Spotlight, Siri and Safari. The crawler already offers an opt-out for publishers.

“With Applebot-Extended,” Apple notes, “web publishers can choose to opt out of their website content being used to train Apple’s foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools.”

This is accomplished with a directive in the website’s robots.txt file. With the advent of Apple Intelligence, the company has introduced a second user agent, Applebot-Extended, which allows a site to remain in search results while being excluded from generative AI model training.
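
As a minimal sketch, assuming the standard robots.txt user-agent rules Apple documents for its crawler, a publisher that wants to stay in Apple’s search-related features but keep its content out of model training might allow Applebot while disallowing Applebot-Extended:

    # Allow Applebot to crawl for search features like Spotlight and Siri
    User-agent: Applebot
    Allow: /

    # Opt this site's content out of generative AI model training
    User-agent: Applebot-Extended
    Disallow: /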

Responsible AI

Apple released a whitepaper on the first day of WWDC titled “Introducing Apple’s On-Device and Server Foundation Models.” Among other things, it lays out the principles governing the company’s AI models. In particular, Apple highlights four:

  1. “Empower users with intelligent tools: We identify areas where AI can be used responsibly to create tools for addressing specific user needs. We respect how our users choose to use these tools to accomplish their goals.”
  2. “Represent our users: We build deeply personal products with the goal of representing users around the globe authentically. We work continuously to avoid perpetuating stereotypes and systemic biases across our AI tools and models.”
  3. “Design with care: We take precautions at every stage of our process, including design, model training, feature development, and quality evaluation to identify how our AI tools may be misused or lead to potential harm. We will continuously and proactively improve our AI tools with the help of user feedback.”
  4. “Protect privacy: We protect our users’ privacy with powerful on-device processing and groundbreaking infrastructure like Private Cloud Compute. We do not use our users’ private personal data or user interactions when training our foundation models.”

Apple’s bespoke approach to foundation models allows the system to be tailored specifically to the user experience. The company has applied this UX-first philosophy since the arrival of the first Mac. Providing as frictionless an experience as possible serves the user, but it should not come at the expense of privacy.

This is a difficult balance the company will have to strike as the current crop of OS betas reaches general availability this year. The ideal approach is to offer as much, or as little, information as the end user requires. Certainly there will be plenty of people who don’t care whether a query is executed on-device or in the cloud; they’re content to have the system default to whichever option is most accurate and efficient.

For privacy advocates and others who are interested in those specifics, Apple should strive for as much user transparency as possible, not to mention transparency for publishers that might prefer not to have their content used to train these models. In certain respects the black box problem is currently unavoidable, but where transparency can be offered, it should be available on request.
