
Instagram’s ‘Made with AI’ label swapped out for ‘AI info’ after photographers’ complaints


The AI label from Meta angered photographers after it tagged real-life pictures that had been retouched in editing tools like Photoshop.


Screenshot of Instagram’s mobile app displaying a picture with the “AI info” tag applied to it.
Image: Meta

On Monday, Meta announced that it is “updating the ‘Made with AI’ label to ‘AI info’ across our apps, which people can click for more information,” after people complained that their pictures had the tag applied incorrectly. Former White House photographer Pete Souza pointed out the tag popping up on an upload of a photo originally taken on film during a basketball game 40 years ago, speculating that using Adobe’s cropping tool and flattening images might have triggered it.

“As we’ve said from the beginning, we’re consistently improving our AI products, and we are working closely with our industry partners on our approach to AI labeling,” said Meta spokesperson Kate McLaughlin. The new label is supposed to more accurately represent that the content may simply be modified rather than making it seem like it is entirely AI-generated.

The problem seems to be the metadata that editing tools like Adobe Photoshop apply to images, and how platforms interpret it. After Meta expanded its policies around labeling AI content, real-life pictures posted to platforms like Instagram, Facebook, and Threads were tagged “Made with AI.”
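Tools that use generative AI features can embed IPTC “digital source type” values in an image’s XMP metadata, which is stored as plain XML inside the file. Meta has not published the exact heuristics it uses, but the general idea can be sketched with a naive byte scan for those markers (the sample XMP fragment below is illustrative, not taken from a real file):

```python
# IPTC digital source type values that editing tools may embed in XMP
# metadata after AI-assisted edits. (Assumption: markers like these are
# the kind of signal platforms scan for; Meta's actual detection logic
# is not public.)
AI_SOURCE_MARKERS = [
    b"compositeWithTrainedAlgorithmicMedia",  # AI-edited composite
    b"trainedAlgorithmicMedia",               # wholly AI-generated
]

def find_ai_metadata_markers(data: bytes) -> list:
    """Return AI-related source-type markers found in a file's raw bytes.

    XMP is plain XML embedded in the image file, so a rough check can
    scan the bytes directly; a real implementation would parse the XMP
    packet properly.
    """
    found = []
    for marker in AI_SOURCE_MARKERS:
        if marker in data:
            found.append(marker.decode())
    return found

# A made-up XMP fragment resembling what an editor might write after a
# generative-AI edit (hypothetical example, not real Photoshop output).
sample = (
    b"<xmp:CreatorTool>Adobe Photoshop</xmp:CreatorTool>"
    b"<Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/"
    b"compositeWithTrainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType>"
)
print(find_ai_metadata_markers(sample))
# -> ['compositeWithTrainedAlgorithmicMedia']
```

A scan this coarse cannot tell a wholly AI-generated image from a photo that was lightly retouched with an AI-powered tool, which is essentially the ambiguity photographers ran into.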

However, Adobe points the finger at Meta and its decisions about how to present that metadata. “We know millions of users use AI today to perform the same aesthetic improvements to content as they did before AI. That’s why when it comes to labeling AI, we believe platforms labeling content as being made with or generated by AI should only do so when an image is wholly AI generated,” said Andy Parsons, Adobe senior director of the Content Authenticity Initiative (CAI), in a statement emailed to The Verge.

Screenshot of Facebook’s mobile app with a picture of a cat that has the “AI info” tag applied to it.
Image: Meta

The new labeling will likely appear first in the mobile apps and later in the web view, as McLaughlin tells The Verge it is starting to roll out across all surfaces.

Once you click the tag, it will still show the same message as the old label: a more detailed explanation of why it might have been applied, noting that it can cover both images fully generated by AI and images edited with tools that include AI tech, like Generative Fill. Metadata tagging tech like C2PA was supposed to make distinguishing AI-generated images from real ones simpler, but that future isn’t here yet.

Here is the full statement from Andy Parsons of Adobe:

Content Credentials are an open technical standard designed to provide important information like a “nutrition label” for digital content such as the creator’s name, the date an image was created, what tools were used and any edits that were made, including if generative AI was used. At Adobe we are excited by the promise and potential of the integration of AI into creative workflows to transform how people imagine, ideate, and create. In a world where anything digital can be edited, we recognize how important it is for Content Credentials to carry the context to make clear how content was created and edited, including if the content was wholly generated by a generative AI model. The Content Credentials standard was designed from the ground up to clearly express this context.

Through our role leading the Content Authenticity Initiative (CAI) and co-founding the Coalition for Content Provenance and Authenticity, we understand how to best express how content has been edited is an evolving process. We know millions of users use AI today to perform the same aesthetic improvements to content as they did before AI. That’s why when it comes to labeling AI, we believe platforms labeling content as being made with or generated by AI should only do so when an image is wholly AI generated. That way, people will easily understand that the content they are viewing is entirely fake. If generative AI is only used in the editing process, the full context of Content Credentials should be viewable to provide deeper context into the authenticity, edits or underlying facts the creator may want to communicate.

Update, July 2nd: Added statement from Adobe.