
How Adobe is managing the AI copyright dilemma, with general counsel Dana Rao

Adobe’s top lawyer discusses the future of copyright, why the Figma acquisition fell through, and why he’s optimistic AI won’t put creatives out of work.



Photo illustration: The Verge | Photo by Chip Somodevilla/Getty Images

Today, I’m talking to Dana Rao, who is general counsel and chief trust officer at Adobe. That’s the company that makes Photoshop, Illustrator, and other key creative software.

Now, if you’re a longtime Decoder listener, you know that I have always been fascinated with Adobe, which I think the tech press largely undercovers. After all, this is a company that makes some of the most important tools that exist across design, video, photography, and more. And Adobe’s customers have passionate feelings about both the company and those tools.

If you’re interested in how creativity happens, you’re kind of necessarily interested in what Adobe’s up to. I bring all that up because it is fascinating to consider how Dana’s job as Adobe’s top lawyer is really at the center of the company’s future. That’s for two reasons. First, more philosophically, the copyright issues with generative AI are so unknown and unfolding so fast that they will necessarily shape what kind of products Adobe can even make in the future and what people can make with those products.

Second, a little more practically, the company just tried and failed to buy the popular upstart design company Figma, a potentially $20 billion deal that was shut down over antitrust concerns in the European Union. So Dana and I had a lot to talk about. 

As it happens, we spoke just one day after Adobe and Figma announced the end of the deal, and he really opened up on how the decision to call things off was made, why Adobe wanted to acquire Figma in the first place, and how the deal falling apart really influenced his thinking on industry consolidation in the future. Then, we got into the weeds on AI and copyright, a story that I think is going to unfold in unpredictable ways for at least the next year, if not more.

Like every company, Adobe is figuring out what the boundaries of copyright law and fair use look like in the age of AI, just like the creatives that rely on its products. But at the same time, it’s also making huge investments in and shipping generative AI tools like the Firefly image generator inside of huge mainstream software products like Photoshop and Illustrator.

I talked to Dana about how Adobe is walking that tightrope and how he’s thinking about the general relationship between AI and copyright as Adobe trains its models and ships the tools to its users. You’ll hear Dana frame this in a way that’s easy to understand: what data can AI companies train on, and then what are we allowed to do with the output of the AI systems?

That’s a simple question, but it contains a lot of gray areas. For example, what does it mean to copy an artist’s style using AI, something that no law on the books really protects? Adobe is pretty invested in that idea, and Dana helped the company draft an anti-impersonation bill it presented to Congress.

Then, of course, there’s the issue of AI being used to deceive people, especially during this current election year. That’s an interesting problem for the companies that make AI tools. Should they restrict what their users can do with them? Adobe is at the center of that debate with something it calls the Content Authenticity Initiative, and I am proud to say Decoder is the sort of podcast where a viral deepfake of the pope dripped out in a puffer jacket is described as a catalyzing event in tech policy.

Dana has a lot of ideas on how metadata can help people know whether what they’re looking at is actually real. One note before we begin: like I said, Dana and I spoke one day after Adobe and Figma called off their deal, which means we spoke before The New York Times announced it was suing Microsoft and OpenAI for copyright infringement.

But the legal quagmire the AI industry finds itself in is much bigger than just one lawsuit, and the issues are really the same across all of the pending copyright lawsuits. It really feels like the entire AI industry is just one bad copyright outcome away from an existential crisis, and it was really interesting to talk to Dana about all of that.

Okay, Dana Rao, Adobe’s general counsel and chief trust officer.

Dana Rao, you’re the general counsel and chief trust officer at Adobe. Welcome to Decoder.

Thank you very much, excited to be here.

There is a lot to talk about with the general counsel of Adobe today. There’s all of the AI and copyright conversation that I love having. And then, just as we were getting ready to speak this week, Adobe and Figma called off the deal to combine companies because of regulatory pressure, particularly from the EU. So I do want to talk about that, but it’s Decoder, so we’ve got to start with the Decoder stuff. People know Adobe really well. Almost anybody working with computers in any way is aware of Adobe. What does the general counsel of Adobe do all day?

Well, we help the company navigate where the world is going, which is really the coolest job, because Adobe is right there. As you mentioned, it’s a 40-year-old company that’s been there at every step of the digital revolution, whether it’s desktop publishing, web video with Flash, digital marketing when we bought Omniture, Acrobat and electronic documents, or image editing with Photoshop. It’s been amazing. And then, last March, we launched our foundation model, Adobe Firefly, which helps people do text-to-image generation. So every step along the way, we’re breaking new ground with new innovation, new technology.

And on the legal side, that means you’re thinking about problems that no one has thought about before because the technology is doing things that no one’s ever done before. It’s great. So I love that part of my job. I actually have the second title of chief trust officer at Adobe. We took that on a couple of years ago when I took on the cybersecurity organization. So I have the cybersecurity engineering organization in addition to legal and public policy. And together, we think of ourselves as helping Adobe establish that value of trust in a digital world and in its relationship with customers.

It’s an intangible world we live in, and the currency you have is the trust that people place in you. That’s how you build your products, and it’s how you comply with the law — do you understand the law, can you shape the law — which is always a fun part of the job. And so building all of that together into that trust value is one of the reasons we’ve gone ahead and called it a trust organization, which is also very cool.

Let me ask you a very reductive question about that. Cybersecurity, cloud services: that comes along with Adobe really moving into being a cloud company, right? The Adobe of 10 or 15 years ago sold software that people ran locally on PCs and Macs. The Adobe of today sells software that you can access in real, meaningful ways in the cloud. At the Code Conference, we demoed Photoshop on the web for people for the first time. That’s a big change, right, that Adobe now lives in the cloud? Has that meaningfully changed your view of the cybersecurity organization, the trust organization?

Yeah, absolutely. I mean, we had, as you mentioned, the desktop era. And there were a lot of, as you may remember, issues with Flash on the security side. 

Hey, you brought it up, not me.

Well, there were. I mean, that’s hardly a secret. All I was saying was that even in the desktop world, or as we were transitioning to the web, security was always paramount, as was understanding how to minimize security issues. Acrobat is a great example of a place where we dedicated security resources to making sure there’s a sandbox, essentially, around Acrobat. So even if someone finds an exploitable vulnerability, there’s really nothing they can do with it because of the sandbox.

That’s the way we’ve architected Acrobat, and that’s really driven a dramatic decrease in people’s ability to exploit things. So that’s been great. But as you say, we moved from that to an almost entirely cloud-based services model, and we also use a lot more public cloud services. So we have to think a lot about the public cloud configuration and how we work with Amazon, how we work with Azure, and how we set up those services to ensure that the data we’re providing is the right data and can’t be exploited. So that’s important.

And then within our own networks, because we have employee data, we have customer data, spending a lot more time there on endpoint detection and response and other technologies that allow us to see what’s going on in the network, see if something’s happening, and then stop it before it spreads. As you know, in the cybersecurity world, there’s no such thing as being immune to an attack. All you can do is the best you can in terms of reasonable security measures to understand what’s going on and stop the spread before it happens.

The two parts of your role seem very, very connected to me in a way that they’re not connected at other companies. Adobe has a lot of customers; it puts a lot of data in a lot of places. You need to sign contracts with your customers and your vendors to say, “Here’s how the data is going to be used.” And then you need to do the work to actually protect the data.

In other places, those structures are different and sometimes in tension, right? The things people want to sell, the contracts they want to sign, are at the bleeding edge, and then the security organization might say no. In your organization, they’re obviously together. They’re under you. Do you feel that tension? How do you resolve it?

Yeah, we still have the tension in the sense that the security engineers have a certain perspective on how to develop and protect our products. And then, as you note, the sales team and the commercial lawyers are out there in the day to day trying to make sure we can sell our products and generate revenue. So there’s always that tension. What’s nice about it being under me is I’m an escalation point. 

I was an electrical engineer undergrad, so I get to speak the lingo with the engineers. And obviously, I’m a lawyer, so I get to speak the lingo with my legal team, and I can bring some harmony together. But we have a fantastic chief security officer who is very practical and business-minded, and he understands the balance between what we can do and what we need to do in a very positive way, a business-focused way. So I feel good about the balance that gets struck even before it comes to me. 

Adobe is a company with integrity. We have 30,000 people, and our salespeople have just as much integrity as the engineers, so they’re not typically the ones who are overselling. They have a long-term value relationship with our customers, and they don’t want to sell them something that we can’t actually deliver on. I made a joke about the escalation point. Very rarely does anything actually have to come all the way up to me to get resolved.

We’ll come to that. You’re foreshadowing the ultimate Decoder question about decisions. 30,000 people at Adobe. How big is the legal team?

My whole org is 700. So it’s about 50-50 between legal / public policy and then security. So security is about 350. And then legal is probably about 325, depending on the day.

I would not have expected Adobe in years past to need a giant IP group in its legal department. Obviously, you need to protect your software; you need to go get patents. But all of this is now happening in a world of generative AI. It feels like having a big IP practice, a copyright practice, within your company, is going to become more and more important. Do you see something like that growing inside your division?

Yeah, I think that’s a good assessment. Historically, intellectual property was always important to us in the sense that we wanted to protect our innovations. We had a record number of patents filed and issued last year, which is great. But we’ve never been a company that has tried to sue people on our intellectual property just because we think they’re copying us.

We’ve always believed that we’re going to win in the marketplace if we innovate. Innovation is the number one way we respond to competition. If somebody steals a trade secret, or an employee leaks the trade secret, it’s very nice to have the copyrights. It’s very nice to have the patents to stop that person from stealing our intellectual property. That’s typically how we think about intellectual property: stopping misappropriation, as opposed to just going after infringement out in the world. It hasn’t been something that Adobe’s ever had to focus on.

In the age of generative AI, it’s been funny because we’ve had a copyright team, of course. It’s been small, and in February, we were really thinking about fundamental copyright questions with generative AI, and I suddenly said, “You know what? Copyright is a sexy legal career now.” I haven’t been able to say that for a while, but it’s, yeah, it’s—

I lasted two years, and I quit. I was like, no one’s ever going to care about this again. And here we are.

And here we are. These lawyers are so sought after, and we have one: J. Scott Evans is our director of copyright. We’re lucky to have him because you really need somebody who understands technology and copyright, and that’s rarer to find in the copyright field than in other legal fields, because copyright has typically been focused on entertainment and the softer IP areas. So someone who understands how generative AI works and is also an expert in copyright is very rare and worth their weight in gold at this point.

So how is that team structured? The legal team, about 325 people. How is it organized?

Pretty classic, I would say. We have an employment legal team. We have a privacy team. We have a corporate team that does securities and M&A. We have a sales legal team. We have an international legal team. We have an intellectual property and litigation team, so that’s combined into one team. They do both the IP and the litigation, and then we have a legal operations team. So we have a pretty clean, I’m sure I’m going to forget somebody… compliance. We have a chief compliance officer, of course, so important. 

Somewhere in the EU, someone’s ears perked up.

Of course, Cheryl House. She’s amazing. She’s been at Adobe 20 years. Let me name-check her. So I would say classic. Every company has all of those functions. Sometimes people consolidate them under one person or two people, but I have those mostly just reporting direct to me. I find that better for me as a style.

Legal and cybersecurity are the two functions of a company that can say no — maybe the functions that say no most often. “You can’t do that. That’s illegal. You can’t do that. That’ll create a security risk. You can’t do that, Margrethe Vestager will call me up and yell at me.” How do you manage that? Do you report directly to the CEO? Do you get to sit in on all the decisions? Where do you enter the chain on both of those roles?

Well, first, I want to make sure I can show you my mug, which I drink from every day. Can you see this? Is it backward? So we have a slogan… When I became general counsel, one of the things I wanted to make sure of was that my legal team was seen as a team that’s there to help the business find answers to problems, not as the so-called “Department of No,” which is how a lot of people view legal. So we created a slogan in legal: “Yes starts here.” And then there’s an asterisk, and it says, “Because we’re a legal team, except when it’s a definite no.” And that’s the balance, right? We’re here to help the company find ways to deliver great products, deliver value, make money. That’s why we’re here on the Adobe team. We’re here to make that happen. If the way you want to do it isn’t the right way under the law, we’re not just going to say you can’t. We’re going to say, “Here’s another way. What’s another way to do it?” And that’s our job as in-house legal: find another way to help them achieve that goal. The client doesn’t understand the law, so they’re not going to be able to brainstorm those alternatives, but you can, because you understand it.

However, there are times where you just say no, and that is also our job. And we’re the only people in the company who have that sort of final no, where we can just say, “It goes against the spirit of the law. It goes against the letter of the law. We can’t do this.” And that is definitely our job. I’m proud to be part of our legal team because they’ve really embraced that business focus, and I’d say that the business team really enjoys partnering with us, or at least I think they do.

Alright, you have to give me an example now. What’s something that you said no to?

Without any details, of course, because I said no to this. I also manage the anti-piracy team, the fraud team, and the engineering team that helps us address piracy and fraud. We talk a lot about the ways we can reach out to customers to let them know that they may be using pirated software, because there’s a lot of pirated Adobe software out there. A lot of people get it on eBay and install it, and it’s unfortunate because that typically is old software. It’s not up to date from a security perspective. It’s virus-ridden, but people are saying, “Oh, it’s cheap, I’m installing it.” So we want to be able to notify people and say, “Hey, you may be using pirated software. Why don’t you go log in to Adobe.com and get the Photoshop plan for $10 a month, and you’ll have the latest, greatest technology, and you won’t have any viruses.”

So you want to be able to do that. But you have to be really thoughtful about the law everywhere in the world because that kind of direct-to-customer communication can be prohibited in some places, because they don’t want you communicating when you don’t have that direct relationship. So we had to spend a lot of time on it. I think the business had the right spirit, right? Let’s go out there and communicate with people. They may be using pirated software, they may not even know, and they’re exposing themselves to these security vulnerabilities.

Even though they had the right spirit, there are some really clear restrictions about how you can go out and communicate with people who you may not have a direct contractual relationship with. And so we had to say no to some of the ways they wanted to do that. So that is a clean, generic version of that. 

Yeah, I’m like, what were the ways? Like, what kind of worms were you installing on people’s computers to find Adobe software?

No worms, no worms. Everybody was trying to do the right thing. But to your earlier question, Shantanu Narayen is our CEO, and so I always have the ability to talk to him about, “Hey, we need to go sit in with the business and help them understand [that] maybe the thing they’re doing isn’t the thing that we should be doing or maybe there’s another way.” He’s always open to that. I would say that my job is very easy because he has the most integrity of almost any business person I’ve ever worked with, which is great because it radiates out throughout the organization. So they know if things get escalated, Shantanu is always going to be on the side of, “We’re going to do the right thing.” And so very few people even bother to escalate things up to Shantanu because they already know what the answer is going to be. And that makes my life a lot easier. It makes my legal team’s life a lot easier being able to partner with somebody who cares so much about integrity.

Let me ask about that in a broader way. You were an engineer. You were a lawyer. Now you’re a tech executive. You oversee the cybersecurity organization. You were at Microsoft, you were at Adobe. The industry has changed a lot, right? And the classic view of the tech industry, the classic view toward the law, has been, “We’re going to push the boundaries, and we’re going to ask for forgiveness.” I’ll use one example that is very familiar to everyone: Google Books. We’re going to scan all the books. We’re going to get in trouble, and then we’re going to come to some settlement, and then everyone’s happy because Google Books exists.

It worked. The plan worked. YouTube. YouTube worked in that exact way. We’re going to push the boundary of copyright law. Now that there’s YouTube, everyone’s happy and everyone’s making money.

That is different now. It feels different, right? Like that move, especially from larger tech companies, maybe not going so well, maybe not as condoned, maybe less exciting even in a way. Has that changed how you think about these decisions and how you think about evaluating the risks you can take?

I think that Adobe has a certain set of values as a company that are independent of the moment. That’s one of the benefits of having been around for 40 years and seeing all the changes you talk about. We had two amazing founders, Chuck Geschke and John Warnock. John just passed away a couple of months ago. Both were legends in the Valley, and both had — I had the opportunity to know both of them — just really strong ethics. When we looked at Firefly and generative models and said, “How are we going to train it?” we could have scraped the web and built the model that way, or we could try to be more thoughtful about how we train it, given the potential copyright issues and the fact that we have creative customers who are concerned about people training on their content without their permission.

For us, the connection we have with our creative customers and with our enterprise customers is a compelling reason to ask, “Is there another way we can achieve this goal that respects these potential issues and still delivers the value to the customers?” But we are an innovative company, and we’re not going to be a company that ships something that nobody wants just because it’s safe. We know that’s the easiest path to irrelevance. So optimizing for safety has never been a goal by itself. We have to also make sure we’re delivering product value. And that balance is really important. So the hard part is, can you do both? That’s what’s fun about Adobe: we tried to do both. We’re up for the task of seeing if it’s possible to both lead on innovation and do it the right way. And that’s what’s been fun.

Alright, so here’s the Decoder question. You already foreshadowed it. You’ve been through a lot. The industry’s changed, you’ve got some core values. You have a very important role to play in the product process and the decision process of the company.

I’m so excited for this question now.

I know. This is a lot of hype. This is the whole brand.

So much hype.

How do you make decisions? What’s your framework to make decisions?

When I have important decisions that come to me, I have… someone asked me the other day how I resist decision fatigue because what you do at the point where I’m at is you make a lot of decisions every day. Every meeting is basically a decision, right? Because that’s why you’re there. Otherwise, people don’t know why you’re there.

And I find it energizing because what I feel like I’m able to do is move things along, getting to a result. So I like the part of being able to help people understand all the factors and then move on. And so I have a system I made up that I call pre-commit, commit, and revisit. It’s my decision-making framework.

The pre-commit stage is wide open. Like, I want to hear everything from everybody, all the stakeholders, all the points of view. I want all the feedback. Maybe I have a thesis on what I think the right answer is, but I’m just listening, and we’re just gathering all that information.

Then we think about it, and then we say, “Okay, after all of that, we’re going to go in this direction.” Now, that direction may not make everybody happy. And frankly, [it] for sure will not make everybody happy because I’m picking a side, I’m deciding. And I think deciding is important. That’s the commit stage, though. And what’s important about that is we’re all in on whatever that was. 

Because we heard a lot of things, we thought about a lot of the factors, and we’re all in. I’m not interested in reopening that decision a month later, two months later, when someone’s saying, “Well, I still think maybe we should go back” and “Are you sure?” I’m like, no, we decided. We’re all in; we’ve committed. Because you need to move forward. You have to just move in some direction. Any direction is better than no direction. Even if it’s the wrong direction, you’re going to learn that it was the wrong decision, and then you can say, in the revisit stage, “Well, that didn’t work out at all. Let’s go do something else.”

But the worst thing you can do is nothing. And so, for me, the commit stage is really important. And at a big company, you can spend a lot of time in analysis paralysis. You can spend a lot of time just sitting there saying, “Well, I need to hear more, I need to hear more,” and everyone has a vote, and they have a vote forever. And so that commit stage, really important.

But as long as you had a good pre-commit stage, and everyone felt like they had their opportunity to share their view, I find that commit stage goes better. And then the revisit stage is really important for me. And I don’t do that quickly. So, for me, it’s like a year later, two years later, how did it go? Right? Is this still the right decision? Did it work out? I really believe in postmortems and reevaluating things. 

We have so many priorities as a company. It’s really important to let go of the ones that didn’t work and not just keep doing them because of inertia. So having an active process where you go back and look at a decision you made and say, “Did that work out and is it still good? Should we double down? Should we reinvest? Or should we say, ’Forget it, that’s not going anywhere’?”

Alright, so obviously I’m going to put this framework into practice against the news this week, which was: 15 months ago, Adobe decided to buy Figma for $20 billion. There was a lengthy review process in Europe, and there were a lot of rumblings from the United States Department of Justice. I would say deal review on both sides of the Atlantic proceeded in different ways, with different success rates, however that’s going.

But this week, Adobe and Figma called off the deal. They said, after 15 months of review, we don’t think we can go forward. The European Commission is basically celebrating, saying we preserved competition. Walk me through it. Everyone knew that this review was coming. I had Dylan Field on Decoder right when the deal was announced. I said, this review is coming. He said, we know. Fifteen months later, you obviously revisited it and said it’s not working out. How did you make that call?

So, it’s actually a great example. I hadn’t thought about it within my framework explicitly, but it’s perfect, actually. We spent a lot of time, as you would imagine, before making the decision to go forward, thinking through the competition issues, what those would be, and understanding them. To the extent everyone understands the details of this: we had a product called Adobe XD, a product design tool that we had started five to seven years ago. Obviously, we were well aware of that.

And Figma was the leader in that space, and we had tried and failed with our tool. It was only making $15 million of standalone revenue. We hadn’t invested in it… Actually, at the executive team level, I didn’t even know it was still alive at the time. We didn’t even talk about it. It was just sort of dead to us, but it was on maintenance mode, and we were just serving out some enterprise customers who had already contracted for it. So we just left it alone.

By the end of it, I think there are fewer than five people working on it today, just to fix bugs and address security issues. So we looked at that, and we really pressure-tested: is this something that we felt could stop us? And no, this product doesn’t exist. It has no share, Figma’s the leader, we’re out, and it’s been dead. So we felt good about being able to move into the interactive product design space because we had exited the product design space when XD failed.

And we think it’s appropriate for businesses to try organically, fail organically, and then say, “Well, let me look at it inorganically.” We think that’s a healthy way for businesses to conduct their business. So that was the pre-commit stage: really pressure-testing that whole question. And then obviously we decided to go forward, right, based on those facts. Since then (we announced it in September 2022, so it’s been 14 months or so), we’ve had a lot of interaction with the regulators, and they’ve been very focused on the newer doctrines of antitrust law that say that future competition is a critical part of the antitrust analysis.

So that’s the potential that we could go back into product design, even though we had exited it, and the potential that Figma could go into our space. That’s what they were really focused on in the regulatory process. And so that’s what we’ve been talking about for the last 18 months: how do we prove it? What’s the evidence we can show?

And we have a lot of great evidence. I would actually argue our evidence, our economic evidence, got stronger over the last year. Typically, antitrust cases are defined by economics, and they’re defined by customers, right? In our case, when you look at the economic data, you don’t see any overlap between Figma’s and Adobe’s customers, which was powerful. We saw positive customer sentiment and really no competitor or customer complaints about the deal, which also used to be a key factor.

So we felt good about the basic facts, but this future competition argument was something that they continued to focus on. And this is all public, right? They published their statement of objections and their provisional findings; the CMA, the UK authority, and the EC both published these findings, so nothing I’ve said so far is secret. They’re really focused on those two things. And so, as a business, we got together with Figma and just said, “Looking at the road ahead and the timing and the tenor of the conversations we’re having, this is probably a good time to stop.”

And you did that before there was actually an enforcement action, right? You saw an enforcement action coming, and you said, “Look, we’re going to call this off because we don’t want to go through that fight.”

Correct.

So I just want to compare and contrast that to, say, Microsoft and Activision Blizzard. Microsoft announces a huge deal: it’s going to buy Activision. There’s an enforcement action. They fight vigorously. They come to some agreements and concessions. They basically fight it through. You decided not to do that. Why? You just thought you were going to lose… What was the reason to not have the whole fight?

So we have been fighting. I would like to say that my team of lawyers has been doing nothing but fighting.

Yeah. But you didn’t get to that official stage, right? For this audience, I’ll call it a trial. It’s not a trial, but you know what I mean: the official proceeding.

I don’t know how many antitrust geeks you have in your audience, but—

Every day it grows. It’s copyright, it’s antitrust. You would not expect it, but every day, it’s growing.

… the way that works in Europe is quite different from the United States. In the United States, the Department of Justice, which is investigating our case, has its own timeline. They bring a case when they want, but there’s no statutory requirement that they do anything by any particular time, right? In Europe, it’s quite different. In Europe, as soon as you go into what they refer to as phase two… so phase one is an investigatory phase, and then phase two is more of an analytical phase.

So as soon as you go into phase two, which we went into in July, June and July, with both the UK and the European Commission, they have a schedule that’s prescribed by statute by when they’re going to come through and have a decision. And in our case, the decision was [scheduled for] February 25th. That’s when they would be officially deciding. And all along the way, there’s a set of hearings you would have, and then they would give you findings, and then you would combat the findings, and then they would give you more findings. But they’re all public, right? They tell you exactly what they’re thinking. They tell you what their concerns are. 

From that perspective, I appreciate the transparency because you know exactly where they are, and you know what’s working and what’s not working in terms of the arguments you’re making. And you get to make your arguments. So we know fairly well what arguments they’re making, we understand what evidence we’ve been providing, and we’ve seen what has been persuasive and what has not. And then we get to sit there and say, “Well, do we think it’s worth it to continue to fight this? Because we could keep going; we could keep going forever on this.” And both of us looked at it and said, “It’s not worth it.”

We’re both very successful companies. We both have extremely exciting opportunities ahead of us. That’s why we wanted to acquire Figma. We’re very excited about them and their opportunity, but they have a lot of opportunity for themselves. We obviously have Adobe Express and Adobe Firefly and our digital marketing business and all those other opportunities in front of us. And so you just ask the question: where should we be spending our time given what we see as a fairly consistent position being taken by the regulators on their version of the facts of the case?

Yeah, let me just make one more comparison to Microsoft, and then I really just want to talk about copyright law for the rest of the conversation. But Microsoft really wanted Activision. Like, at one point, it was almost confounding how badly they wanted this deal to close, even as it seemed like the regulators in the world really wanted it to stop. And they had a lawsuit in this country.

You know, the UK CMA and the EU do not always get along. I’m told that there was some sort of Brexit situation there. They’re fighting this fight on multiple fronts — a proper legal proceeding here, and then they’re making tons of concessions in Europe about game streaming and who can stream games and where the titles will go.

Did you come to a point where you thought, “Okay, these are the concessions we would make, and these are the concessions we won’t”? Did you ever consider making those concessions? Did you just walk away before that?

On December 18th, the CMA published what we refer to as a remedies notice, and that would be our response to them saying, “Hey, here’s what we would do from a remedies perspective.”

And so that’s public again. And we said, based on the way they were constructing this future competition theory, what would we do? And what would they do? We didn’t really see any kind of remedy that would address it. The way they’ve constructed the argument, there’s not a remedy that would make sense to address the issues that they’re raising because they’re raising future issues, future competition. 

So the only way to solve a future competition issue, the concern that someone might do something, is to not do the deal. That’s essentially what they were telling us. So it did not appear, in our case, that remedies were an option they were considering as a way to resolve our situation. We saw that, and we continue to believe in the merits of the case; for all the things I said before, we think the facts were on our side.

But again, we stared ahead to the next three months. I think one of the most important things maybe we didn’t succeed in talking to the governments about is [that] focus is really important at a company. There are only so many things you can do well. If you try to do everything, you’ll fail at everything.

And we say that because the argument that the government was making was essentially saying to us and Figma, “You guys can do everything. So we assume you will do everything, and therefore, Adobe, you’re going to go do this, and Figma, we believe you can do that,” and whatever. “Figma is going to build Photoshop,” or whatever. They have 800 people, but then they’re going to somehow magically do everything. We tried to explain to them that the focus of the company is so important, and Figma is very clear about what their focus is. It’s Figma design and it’s FigJam, which is the whiteboarding tool, and then they want to build a developer extension that allows you to generate code out of your product design. So they’re very focused on what their path forward is. And we have our own path: Adobe Express and Adobe Firefly and our NTX, this new product we’re calling Adobe GenStudio. That’s where we want to spend our time.

And so every moment where we are choosing what we’re going to do, we’re going to spend our time on the key priorities we have. Like for me today. Today, I said to myself, “I really want to make sure people understand AI and copyright.” Like, I think that’s helpful. So I decided to spend an hour with you to talk about AI and copyright. 

I can tell you’re imposing a hard pivot on this conversation right now. You’re getting the flag from your own lawyer in the corner. Let me just ask one more because this is, I think, really important. It cuts to the heart of it. Dylan Field, CEO of Figma, gave an interview [on Dec. 18th]. He said the enforcement climate for antitrust is different now than it was a year or so ago when you launched the deal. That is true, right?

And we’ve seen Lina Khan and the FTC in this country really go after some things, not as successfully as the Europeans. We’ve seen the Europeans go after some things, extract some concessions — literally, in the last week, Epic won the antitrust case against Google. Google settled its case with the state AGs. There’s just a lot more enforcement activity happening around the world in many different ways. Has that changed your perception of how you should do deals, how you should think about deals, how you should evaluate them in that pre-commit stage?

I think that you have to understand that the regulators are going to be aggressive. They are so interested in tech that they don’t mind bringing a case and losing it. They’ve said that publicly. That’s not an issue for them. And so if you’re doing M&A, if you think about the acquisitions you’re going to do, you have to be really thoughtful about the likelihood that there will be an enforcement action.

And also you have to really think through this future competition question that they’re using as a new doctrine in antitrust law. It hasn’t been the law in the United States, and it’s still not the law in the United States, but you have to think about it as you go forward because you’re going to be in it for, as you saw for us, up to 18 months, maybe longer, right?

I mean, I believe in Microsoft Activision, the FTC is appealing their loss. And so that’s still going on. So you really have to think about your decision. And I would say, the government should also be thinking about the consequence of that type of enforcement because M&A, I think, and I think most of us think, is good for the economy. It’s good for innovation. It’s good for jobs. Adobe has built itself on organic innovation and inorganic innovation. We’ve done both, and we’ve grown to be successful.

We have 30,000 employees. They have salaries and benefits, and they contribute to the world. And millions of people build careers off the technology that we provide. I would say Adobe has been a net good for the world and the economy, and we would not be where we are without the ability to do inorganic acquisitions. So, as governments, we really need to make sure we strike the balance between preserving competition and innovation through antitrust laws and ensuring that companies can continue to innovate both organically and inorganically.

I’m going to grant you your pivot. You’re setting me up for a really good segue to copyright law. Thank you, I appreciate you.

You’re welcome.

Antitrust law: should we protect future competition or not? Should we allow more M&A or not? How much competition should there be in a given economy? At least the parameters of those questions are known, right? There’s a body of antitrust law, and it waxes and wanes, and maybe some regulators are more aggressive than others, but it’s like a set of known legal principles.

Where copyright law is today with generative AI feels like totally novel problems with no settled answers, and almost in a zero-sum way, right? If we allow LLMs to go wild with training data, we will absolutely reduce the market for human-generated works. That’s just the way it’s going to go. 

Adobe has Firefly, the foundation model. Your customers are also very loud creatives who are very opinionated about this stuff. How are you thinking about copyright law in this way? You said earlier you realize you had to start thinking about it differently, not just from a protecting IP perspective but as foundational questions about the law. Where have you arrived?

We think that the law itself will evolve as generative AI evolves. And so we understand that. And there’s a lot of litigation going on, as you know, class action lawsuits being brought against the LLM providers for copyright issues saying you can’t train off the web. And so we see all of that.

And we know the law is going to be different in Europe, and it’s going to be different in Japan, and it’s going to be different in China. So there isn’t even going to be one law that you can rely on to say, “I have a green light to train my model any way I want to.” Understanding all of that, and knowing that our creators themselves have these concerns, we decided to train Firefly on our own Adobe Stock content and other work licensed from the rights holders to build our model. And that was a computer science challenge because, for AI to work, you need data. And for AI to work well, you need a lot of data. The more data you have, the more accurate your AI will be, and the more it will have all the different kinds of styles you want, whether cinematic or natural or graphic or illustrative, if you’re doing text-to-image. So you need a variety of data as well.

And the more data you have, the less biased your AI will be. The breadth of the data will help reduce the bias, while a smaller sample set will naturally have more bias because the model is learning from fewer examples. So you need data. You need access to data. And that’s the tension. So we had to go to our AI research team and say, “Can we build a competitive model without going to the web?” That was the challenge.

And so we have decades of image science expertise plus decades of AI expertise. And they worked really hard to understand how to construct the model, Adobe Firefly, that could be competitive with all the people out there who had access to more data than we did because they were just training off the web. And we feel really good about where our first model was, which we launched in March, but we were even more excited about the second version of the model, which we launched at Adobe Max a month or so ago. And we feel that one is better than our competitors’ and yet still adheres to those copyright principles.

Now, the good news about the choice we made on copyright is that it respects our creative customers who are concerned about this. And enterprise customers were very excited to know that they’re going to be able to use a generative image model that doesn’t have IP issues, doesn’t have brand issues, and isn’t trained on unsafe content, because all of that either was never in the dataset in the first place, because of the way we trained it, or we can use content moderation to address it before it even gets into Firefly.

Enterprises have been very interested in using the Adobe Firefly version of text-to-image generation because of that choice we made. So it’s kind of cool to be able to do what we think is the right thing and also do the right thing for the bottom line.

In June, Adobe offered to cover the legal bills of anyone who gets sued for copyright infringement if they use Firefly. I’m assuming a question of that amount of liability comes to your desk. When you looked at that, do you say, “Well, look, I know where this was trained. It’s fine. You can do it”? Or is that more, “I think the risk is manageable”? 

You know, what was cool about this whole process was how important it was to the company… This is why I talk a lot about executive focus. We had a weekly meeting with our legal team that I was in from February through probably a month ago, where it was me, the copyright lawyer, the product lawyer, the sales lawyer, the privacy lawyer. All of us met every week because we were just trying to figure out where the law was going to be and how to navigate it. We had our AI ethics person there. What are we going to do about training? How are we going to deal with bias? What are we going to do about indemnification? All of these issues came up naturally because, once we realized we were training it this way, the next question was, “Well, when we go to enterprises, what are we going to protect? And are we going to stand behind it?”

The answer for us was, “Of course we’re going to stand behind it.” We know how it was trained, so we’re willing to give an indemnification. What’s good about our indemnification is that there’s very little risk you’re going to get sued for an IP issue in the first place, because of how we trained it. So it’s good for us. But it’s also good for the customers, because if you get an indemnification from someone whose model still has IP issues, you may be able to get them to indemnify you, and they will, but you’re still getting sued. And that’s not fun.

And so it’s still a competitive advantage, we believe, to have trained it this way, and then being able to offer it to enterprises with the full indemnification that you mentioned because not only do they feel good that we’re standing behind it but they also know because of the way we trained it, there’s less likely to be a risk.

There’s something in here that I’m struggling to articulate. Hopefully you can help me out. You’re out in the market. You’re indemnifying customers. You’re selling the products. Every week, a new Firefly-based feature ships in Lightroom or Photoshop. The downstream effects of the law are happening at the product level. And then I look back, and I say, “Well, all of this falls apart if one fair use lawsuit goes wrong, like Stability loses to Getty, and then everyone has to reevaluate everything they’re doing.”

The copyright office is evaluating whether AI work should be copyrighted at all. You have testified before Congress that you think it should. We’re all the way at the base level of what should get copyright protection while you are out in market selling AI products. How do you think that resolves?

I think there are a couple of things that are being litigated here that we should unpack. One is, what can you train on? And then, can you get a copyright on the output? So the input and the output are separate questions. And I think both—

But you understand why I’m saying I can’t quite articulate it. That’s the whole thing. Like what goes in— 

No, I agree.

and what goes out, both completely up in the air.

They’re fundamental questions. Absolutely. And again, as I was mentioning, around the world you’re going to get different answers. You may win in the United States, but you may lose in Europe. And then what does that mean? The EU AI Act that passed in the first stage (there are more details to come) says that you have to obey the EU copyright directive. It’s not totally clear what that means for AI, but it could easily mean that you’re going to have to obey copyright law and not train off the web in Europe. That could easily be the interpretation. So my only point is that winning a fair use case in the United States may not even help you somewhere else.

So, the good news about what we did: we look at the fair use cases, and we definitely see the fair use argument on a technical level. We understand the science of AI, we understand what you’re copying and what you’re not copying, and we understand the transformative use that would have to happen. You’re not really taking an image, and you’re not really moving it to the output.

The output of the model is not really a copy of anything it was trained on. If it is, that was a bug. It’s not supposed to be. It’s supposed to be creating new work, and it’s just learning from a computer science perspective data about the images, not the images themselves. It doesn’t even understand what the images are. And you know that because, as an example, it always gets text wrong. Generative AI always gets text wrong because it has no idea it’s words. There’s no actual cognitive understanding in these generative models. They’re not intelligent. It thinks these words are just symbols, so you’re always going to get this misspelling issue.

These things that are coming out are not really copies… So we see the fair use argument. And we definitely think that one, two, three years from now, if people keep appealing things up to the Supreme Court, you could see fair use come out and say, “Hey, you are allowed to scrape it.” It’s possible. 

Wait, can I just offer you the pushback? I hear the pushback on the fair use argument from other folks all the time. There are four factors: the amount of the use, the nature of the work, the purpose of the use, and then the last one is the effect on the market for the original works. And in this case, with generative AI, it feels like you’re going to destroy the market for the original works. You’re going to destroy the market for human creators. And no one knows how that argument is going to go inside the classic fair use analysis.

We totally agree with that as a danger. And that’s why we want to make sure we have ways to establish rights that are not maybe even copyright rights for creators so they can protect themselves from the economic loss AI can bring. Before I get to that, I think there are four factors. We don’t know how it’s going to turn out. The Warhol case clearly showed the Supreme Court is interested in the economic factor as part of the fair use analysis, but there is still a threshold question of what got copied and did it get copied. You have to have a copy for copyright law. 

So I can see both sides of this. I’m just saying, you could easily see the fair use cases going the way of the AI models just because of the science, but then you could see a judge looking at this thing from an “is this the right thing to do” perspective — and that’s what fair use is, it’s an equity-based analysis — and saying, “Hey, we’re going to try to correct the harm here, and we’re going to expand fair use to cover it.” It’s possible. It could go either way.

The nice thing about what Adobe did is we said, “We’re sidestepping the whole copyright issue by training on licensed works.” So no matter what happens in those class action lawsuits, our model is fine. We don’t have any concerns. It’s one of the reasons we chose it: we wanted to have that stability in our product. We’re not going to have to rip Firefly out because of some court case, and that is important to us. That was a choice we made back in March.

Again, it was harder to do it that way from a science perspective, but our engineers were up to the task, and we feel like they pulled it off beautifully, so that’s important. On the second part of your question, on whether the output of AI is copyrightable: our position has been, when we stare at this, that typing in a prompt and having generative AI create the image doesn’t produce output that’s copyrightable by itself, because we think that the last step of expression is being done by the AI, not you. You’re typing in your prompt, you’re like “pink bear riding a bicycle.” The AI is choosing, in the first instance, what kind of bear, the shade of pink, all the things that are supposed to be the expression the artist contributes in order to get a copyright. So we think that just typing in a prompt is probably not going to create a copyrightable expression.

It is possible in the future that you can create a very, very detailed prompt, and they may even look at the prompt and say, “Is that copyrightable, because I put so many words in it and then I control the output so much that the thing that came out is almost exactly what I envisioned in my mind?” I’m not sure where...

Yeah. And then you can copyright the prompt engineering side of it.

Right, and maybe the output. I don’t think we’re there yet. I’m not even sure that holds up. What I do say is that once you get your output, you can still get a copyright by putting it into Photoshop, or whatever your tool is (we have one), and adding your own expression. Add your own expression, and now it’s your work. That, I think, is how you can still take comfort that you can get a copyright in a work that is based on generative AI. You still have to add something to it on top. And most of us will. If you’re a creative professional, you’re never satisfied with what comes out of one of these generative AI models because it’s not exactly what you wanted. You’re always going to make it whatever your vision is. We refer to it as the first step in the creative process, and all the other steps are going to be copyrightable. So we feel like that piece could get resolved just by people using the technology the way they would normally use it.

Yeah, the problem is that generative AI is not limited to use by creative professionals who know how to use Photoshop. It democratizes creation in a massive way, and a lot of people aren’t going to take a second expressive step to make sure that it’s… They’re going to fire off tons and tons of content, believe that it’s theirs because they made it, and then file DMCA requests against whatever platform. 

That’s like the real danger here, right? The generative AI models can just flood every distribution pipe that exists with garbage, and we have no way of discerning it. I know there’s stuff Adobe does, like the Content Authenticity Initiative, which I want to talk about. There are these metadata ideas. But just in the base step, where do the copyrights come in? Where do they go? What is allowed? What is not allowed? Do you have a position there that at least gives you clarity as you make all these decisions?

I think that we do not believe just typing in a prompt is going to create a copyright. First of all, right now, the statute requires human expression anyway, and so that’s going to be a problem for anyone in the United States who wants to get a copyright on AI-generated work. The one thing I want to come back to that we talked about is the economic harm, so the place that we see… So, people just generating things off the model: I think those things are not copyrightable, and it is what it is.

I think where we worry, on the creative side, is this idea of style. We really feel like there’s this potential, and we’ve actually had artists already come to us and say this is happening, where people will say, “I’m going to create a painting in the style of Nilay based on all the work that you’ve done that my model was trained on.” And now I can create something that looks just like something you painted digitally. And I can go sell it. People say, “Great, I’ll spend $10,000 for something Nilay creates, or I can go buy something that looks just like his for a buck.” That’s the economic harm that we really see as an issue. There’s no such thing as style protection in copyright law right now. You can’t protect that. That’s not a concept. 

So we’ve introduced, in the same set of testimony you referred to earlier, this idea of a federal anti-impersonation right. And the idea of that is it would give artists a right to enforce against people who are intentionally impersonating their work for commercial gain with the typical fair use factors, allowing exceptions. But the goal there is to say, “Hey, if someone is basically passing off themselves as having created your work because this is in the same style, it’s an impersonation of you, you should have a right, like copyright. You should be able to get statutory damages. You should be able to go off and go after those people and remediate that economic harm.” And so that act… we wrote a draft bill, we gave it to the Senate Judiciary Committee, they’re reviewing it. We’re talking to people about the value of style protection.

I think where we sit here at Adobe is we try to see out what we think the problems are going to be. We think that’s going to be a problem. We think that people are going to lose some of their economic livelihood because of style appropriation. And we think Congress should do something about that. They should address that.

Is that rooted in copyright in your mind, or is that rooted in trademark or another…?

It’s not like anything, but it’s closest to a right of publicity, probably closer, or trade dress, maybe. And then, of course, copyright. So I would say it’s some version of those three things, but when we sat around, again, in March, and we were thinking about the consequences of generative AI and text-to-image, we said, “We’re probably going to need a new right here to protect people.” So that’s when we said, “Let’s get out after it.” 

It’s sort of like — you mentioned the Content Authenticity Initiative — the same way we thought about this four years ago, when we saw where AI was going in terms of being able to generate deepfakes, we said, “We’re going to need to do something about addressing that problem, and let’s get together and figure out a solution for it.” And with content authenticity, the answer was a technological solution. But for this style issue, we think that’s probably a legislative solution. And so we really feel like, as a community, we need to think about the consequences of the technology we’re bringing to market and then address them in a proactive way that helps everybody and still ship the world’s greatest technology.

Shipping the technology is something everyone seems to understand how to do. "Should we?" is the major question in the AI world. Let me ask you… Photoshop is like a great playground for hypotheticals when it comes to this. I take a photo of Donald Trump, and I say, "Put him on an elephant using generative AI." I'm picking an elephant because it's a symbol of the Republican Party. And I take a photo of Joe Biden, and I say, "Put him on a donkey," the symbol of the Democratic Party. And I create a composite of these two characters using Photoshop. Does Adobe want that to happen? Do you think that's okay? And that's obviously a silly example. You can easily imagine vastly more destructive examples.

The way we look at all of the generative AI tools and almost every technological tool is that you want people to be able to do whatever their vision, their creativity, is. Ninety-nine percent of the users of Photoshop are making art or creative expression in their marketing materials, advertising campaigns, whatever it is. They’re just out there expressing themselves, and you want to encourage that, and you want to let people do the things they need to do.

If someone’s out there misusing a tool to create harm, your example may not be harm, maybe parody or satire or something, but not harm. But if someone were actually trying to create harm, like there were an image of [Ukrainian President Volodymyr] Zelenskyy saying, “Lay down your arms,” a deepfake of him, actually do harm, then the people who misuse the tool should be held accountable, no question, for causing harm. Just like any other bad actor who’s misusing something to cause harm.

So I always think that’s important to remember because most people are using this tool for good. If there’s somebody out there who says, “I can see the potential of this, and I can use it for bad,” you should address the person who’s misusing it — not address everybody who’s using it for good. However, what we said was the real problem, when you ask what Adobe wants, is we said because people are going to see how the ability to access all these generative AI tools and do amazing things and create realistic but fake images, it’s going to be easier for bad actors to create these deepfakes and deceive people. And then, looking ahead, fast-forward maybe even to today but four years ago when we were thinking about this, we said, “Well, now people can easily be deceived by these fake images, fake video, fake audio, and that can have real serious consequences for their livelihoods, if it’s just personal to you, or for the world, if it’s like something about the global environment.

And so we said, "Well, what can we do about that? Can you detect deepfakes? Is there a way just to say, 'Hey, I'm going to create an AI deepfake detector, and every time you see an image, it's going to say: this is a deepfake, don't trust it'?" And so we again put our AI image scientists [on it]. They love me at Adobe because I'm always giving them homework assignments. And I said, "Hey, can you do this?" And the answer came back: it's probably 60 percent accuracy, so not consumer-grade. It's always going to be very difficult to detect if something's a deepfake, and that technology's always evolving.

We said, alright, given a world where you could possibly have things that look real but are not, what can we do to help? And we said, let's make sure people who have something important to say, news or other important stories, have a way to prove what's true. Because once everybody realizes that everything can be faked, that these digital images can be manipulated or edited in a way to deceive them, they're not going to believe anything anymore. It's the doubt that is more dangerous than the deepfake.

And you see that today when you’re even looking at the Israel-Hamas [war], and you’re like, did this happen, did it not happen? Is it a real image, is it not a real image? Because you know everything can be manipulated, so you don’t believe anything you see and hear. And that’s very dangerous for a democracy or a society because we have to be able to come together and talk about the facts. We can fight about the policy; we have to be able to talk about the facts, but if we’re just fighting about whether the facts even happened, we’re not going to get anywhere. 

So the technology of the Content Authenticity Initiative, which we and a few others founded four years ago, allows you to attach metadata to images. It's like a nutrition label. It gets associated with the image and goes with it wherever the image goes. And a user (it's the public who gets to decide) can look at an image, see the provenance, and then say, "Oh, okay, I believe it. This person went through the bother of proving what happened." You can capture information like who took the picture, when it was taken, and where it was taken.

We have over 2,000 members in this initiative right now. We have companies like Sony and Leica, both of whom announced recently they're shipping cameras with this technology, called Content Credentials, in the camera. Leica is already shipping that camera. It's in the stores. And that means you can take a picture, turn on Content Credentials, and when you take that picture, it has that metadata right there. [You] move it to Photoshop, which also has Content Credentials, make your edits, and it tracks the edits that were made. It ends up getting published to The Wall Street Journal, which is also a member of this standard, and then, on their website, you can see the icon, click on it, and say, "Oh, I know what happened. Biden did meet Zelenskyy. I believe it because they proved it." And now we have a way for people to prove it's true in a world where everything can be fake.
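To make that workflow concrete, here is a minimal sketch of the kind of edit-tracking record Dana is describing: who captured the image, when and where, and what edits followed it through the chain. The field names and the append_edit helper are illustrative assumptions for this sketch, not the actual C2PA manifest schema, which also involves cryptographic signing.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Illustrative sketch only: these field names are assumptions,
# not the real C2PA / Content Credentials manifest format.

@dataclass
class EditAction:
    tool: str        # e.g. "Photoshop"
    action: str      # e.g. "crop", "exposure adjustment"
    timestamp: str   # when the edit was recorded


@dataclass
class ContentCredentials:
    captured_by: str   # who took the picture
    captured_at: str   # when it was taken
    location: str      # where it was taken
    device: str        # e.g. a camera with Content Credentials enabled
    edits: List[EditAction] = field(default_factory=list)

    def append_edit(self, tool: str, action: str) -> None:
        """Record an edit so viewers can see everything done to the image."""
        self.edits.append(
            EditAction(tool, action, datetime.now(timezone.utc).isoformat())
        )


# The metadata travels with the image from capture through editing to publication.
manifest = ContentCredentials(
    captured_by="Staff photographer",
    captured_at="2023-12-18T14:02:00Z",
    location="Washington, DC",
    device="Camera with Content Credentials enabled",
)
manifest.append_edit("Photoshop", "exposure adjustment")
manifest.append_edit("Photoshop", "crop")
```

A publisher participating in the standard can then surface that recorded history next to the image, which is what the clickable icon on a member site exposes.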

So you have an amazing quote about the Content Authenticity Initiative and the image of the pope in a puffer jacket. You said the pope jacket was a "catalyzing event." People saw this, they realized the problem is real, and they started joining the CAI. It seems like other standards (we love covering standards here at The Verge) get bogged down; there are politics. How's the CAI going? Is the pope really bringing everyone together?

It’s going really well. I think we’ve set records. We had a symposium last week at Stanford where we brought in all the members of the organization and the standard together. And the first time we did this four years ago, there were 59 people. This time, there were 180 people. They’re representing all these thousands of organizations. 

The standards organization that we formed to build this technology is called the C2PA. After just a year, we had the 1.0 version of that technical specification, and we're already up to 1.4 in just four years. And that's describing how to build this provenance technology for images, video, and audio, and how you use watermarking. It has all the technical specifications that anyone can now go and implement in their own products and services. And so it's an open standard. It's free to use. Adobe doesn't make any money off of any of this technology, by the way. We're just helping lead the coalition of people who are coming together. So I don't know of another standards organization that has been more successful. And that's because people were there, and that's what was so exciting at the symposium last week. People are there because they want to make a difference. All these people, they have day jobs, but they're in it.

Arm’s in there. Qualcomm’s in there. Qualcomm just announced that Snapdragon is going to be C2PA-compliant, so that’s an amazing step forward. We have all the media companies, as I mentioned. We have The New York Times and The Washington Post and AP and Reuters. It’s international. Organizations are doing it, obviously people like Microsoft and Getty, so just a breadth of kinds of companies who are all coming together and said, “Hey, we want to work together to address the negative implications of the deepfake-creation ability of generative AI because we all want to live in this society, and we need it to work.

So let me ask a really hard question there. I love the AI Denoise feature of Lightroom. I think it’s the best. I’m also like the world’s leading practitioner of “What is a photo?” hand wringing. Because I think the definition of photo is being stretched by smartphone cameras in really insane ways, to the point where there’s a Huawei phone that now just has a generative AI tool built into it, and after it trains on a bunch of your own photos, you can just generate yourself on the Moon, and the camera’s like, here’s a photo.

So the boundary of what a photo is is getting more and more expansive on the consumer side. Then you have an initiative like yours where you're saying, "We're going to authenticate photos at the moment of capture all the way through the chain until you're looking at The Wall Street Journal on your web browser, and there's a tag that says, here's the provenance of this photo."

Somewhere in there is a whole bunch of judgment calls, right? We can increase the brightness of this photo. We can change the contrast. We can add a vignette. We can remove dust and scratches. The photo agencies have had rules like this for a million years. Here’s the limit of the editing you’re allowed to do. Then there’s stuff you can do now on smartphones that feels way beyond the limit but also seems, I don’t know, like people are going to make the argument they’re fine. In particular, I’m thinking of Best Take on the new Pixel phones where you shoot a whole bunch of frames of a group of people and then you can make them all look at the camera. Is that a photo in your mind? Does that get a prop? Like where’s the line for you? What is a photo?

The premise of the initiative for me is that we want to give people the ability to prove what’s true.

So is that true? So if I take 50 frames of a group of people, and I assemble a collage of them all looking at the camera, is that true?

So the way I’m answering that question is, this is for somebody who says, “I need you to believe this. This thing actually happened. And I’m going to record everything that happened to this image, all the stuff I did to it, that automatically happened, it’s in the metadata.” So my answer to you is not, “What is the technology capable of?” It’s “What do you want to do with the technology?” And the more you use the technology to blur the lines between real and imaginary, then the less you’re going to be able to prove what you did was true.

So if I were someone saying, “I’m taking some very important picture, there’s a volcano that exploded in Iceland, and I want to show you that actually happened,” I would probably use less of the artificial intelligence in the initial capture so I can show people that this happened. The original base image is true because I captured it this way. And then I can edit as much as I want to make it easier to see, but people will always be able to understand what the base image was, and they can see what your edits were. 

And one of the cool things that we're doing to show this to consumers, we have a site called Verify.org. On that site, if you bring your content-credentialed image, you can use a slider. It will show you the image you're seeing on The Wall Street Journal, and as you move it back and forth, you'll see the original image. And then, just like that, you can say, "Oh, okay, I understand what the base image of reality was."

If your base image of reality is generative AI, and it'll show that in the metadata, then I'm not going to believe it. So if your goal was to be believed, my suggestion would be [to] minimize the use of AI in the initial capture in the Huawei phone or whatever it is you're talking about, because people will believe you less. And this is all about what we refer to as a dial of trust. It's up to the person taking the picture where they want to put the dial. If you want maximum trust, you take it with a Leica camera that has no AI in it and has Content Credentials, and you put your name on it and the date and location, all that metadata, and that's a high level of trust. Or you can say, "I don't really care. This is about my cat. It's going to Instagram," you know, or whatever.

Can I ask you a really deep and meaningful philosophical question there? The idea that we will now create two classes of cameras: Leicas are very expensive. The Sony cameras that have this technology in them are for professionals at news agencies, basically. The social revolution of the past decade is smartphone cameras. That has democratized all kinds of… The social justice movement around the world is built on the back of smartphone cameras. Where do those things collide, right? We create a class of cameras and production chains that we're saying can be trusted in a world of AI, and then there's the thing that is actually causing the change, the smartphone camera, and it's not there yet.

Yeah, we don’t see classes. We’re class-free here at Adobe. And everyone can join this initiative. And I think everyone should join this initiative. It is great to see Qualcomm say Snapdragon will be C2PA-compliant because that’s a step toward getting it on the smartphones, but we absolutely think all the smartphones should have the ability to add Content Credentials. There’s no reason they can’t. So this is just a question of whether or not all the endpoints participate in this standard or some standard like this, but I think it’s going to be a necessity for everybody if they want to be believed to have this ability, and there’s no reason why this technology can’t live everywhere.

Alright, so come back to my dumb question. What is a photo? If I take 50 frames, and I let an AI point all the faces at me, does that count as the truth? Is that a photo, or is that an AI-edited image?

I would say that my view would be if it does not accurately capture the reality of the moment, then it is not a photo.

Okay, you heard it here first. I've been, I mean, I truly struggle with this question. I have old cameras that I basically set on the shelf, and now AI Denoise in Lightroom has brought them back to life. It has enabled me to use them in new ways, and I'm taking photos of my daughter, and I love using these cameras and my old lenses and all this stuff, and I'm like, "Is this real?" I truly do not know the answer. I struggle with it every time. But then the output is so rewarding that I'm using it without a second's hesitation. And I think there are the big questions about election disinformation and content authenticity and proving provenance. And then there's this little question of, boy, this technology feels really good to use.

Yeah.

And making people apply a moral judgment to how they use technology when it feels that good is an almost impossible task.

I don’t think there’s a moral judgment, though—

Well, am I lying to people?

I think everything we’re doing… with the way that your brain works, right? Every image you see, first of all, everything you’re seeing happened a second ago, so it’s not even present. And then it’s all going—

Alright, now I brought you deep into philosophy.

Right, right, right. And then everything you’re seeing is being interpreted by your own cornea and then your own ability to interpret color and then what you’re looking at and what I’m looking at are totally different shades of blue. And so the whole thing is really kind of fictional, this idea that we’re all sharing a common visual reality. So I would not put a lot of stock into that, either. So your 50-person composite picture to me is certainly a version of reality that I would believe is accurate. It’s just that there’s going to be a slippery slope if generative AI is involved, that you could then easily manipulate it to be something, a pose, that didn’t actually happen, and then people will be deceived. And I’m saying, if I want to draw a bright line around, “I actually need you to believe this happened,” I would draw it farthest to the left before AI gets used. But there’s nothing wrong, obviously, with the other parts because it looks beautiful, and people are always doing that. And it doesn’t change the truth of the image. So we’re not even saying don’t use generative AI. We’re encouraging people: use generative AI on that picture, but tell people what happened to it. Did you remove a background? Did you remove a stranger that was occluding the view of the primary subject? That’s all great. As long as people can see what you did, then they can believe you.

So one of the things that’s really hard there is where you land the enforcement, where you land that mandate. There’s some stuff in Photoshop you cannot do, right? You cannot counterfeit money in Photoshop. I’ve tried. I was younger, and the software kept me from doing it. You just can’t do it. Is there stuff that you think Photoshop can’t do now or should not allow people to do?

Well, there’s always the, for all people in the image generation business, there’s the question in their law enforcement requirements about child sexual abuse material, and you don’t want to allow that to be generated. You don’t want to allow that to be proliferated on your server, and you have to report it to law enforcement if you detect it, and you should have ways of detecting that kind of material that’s being created. So that’s critical and important to address at all times.

And then there's anything that Photoshop is capable of that is going to break the law by its very nature. That's where the counterfeiting one came from: the government engaged with the team and said, "Hey, we want to make sure this can't happen." So we complied, and Adobe will always comply with those kinds of requests. When the government comes to us and says, "There's something in this technology that is unlawful or really harmful [to] society," we're going to take a hard look at it and say, "Let's see if we can address it." On the other hand, we are also on the side of innovation and freedom of expression, and so we're not going to stop everything just to uphold a particular ideal or point of view.

Where do you put the enforcement, I think is the question that I keep coming back to. For example, I had AMD’s Lisa Su onstage with me at the Code Conference. I said, you can see how open-source models running on a Windows laptop completely evade all law enforcement, right? You can just see how you can make a bunch of rules and then someone has an open-source model, and they’re running it on Linux on an AMD chip, and the last person who can stop them is you. AMD has to stop them at the chip level. And she said, yeah, that might be true. You know, the model and the chip have to work together, but we’re open to it, we’ll figure it out.

For Adobe, as you move more things into the cloud, you have many more opportunities to do basically real-time content moderation. Maybe it’s on the prompt, maybe it’s on the output, maybe it’s what people are storing in their content libraries, or you could push everything back to the desktop where you see less and less of it. Where do you land the enforcement? Where do you think you have to do the checks?

We have to do the checks wherever the content's being created. The jury's out on where that will be four years from now. Will it be on device? Will it be in the cloud? We have to adapt to where that is. Right now, it's in the cloud. So we understand what prompts you're typing in and what kinds of images are being created, and that's where we have to do the check. I mean, if you're uploading your images to our servers, that's where we'll do the check. That is still the current world we're living in, and that's our approach.
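As a rough illustration of what a check at the point of creation could look like in the cloud, here is a minimal sketch of a gate that screens the prompt before generation and the output afterward. The blocked-terms list, the run_model stub, and the classify_output stub are hypothetical stand-ins, not Adobe's actual moderation pipeline, which would rely on trained classifiers rather than keyword matching.

```python
# Hypothetical sketch of a cloud-side moderation gate. The stubs below
# stand in for the real model and classifier calls.

BLOCKED_TERMS = {"counterfeit currency"}  # placeholder policy list, purely illustrative


def run_model(prompt: str) -> bytes:
    """Stand-in for the text-to-image model call."""
    return f"<image for: {prompt}>".encode()


def classify_output(image: bytes) -> str:
    """Stand-in for a server-side classifier run on the generated image."""
    return "ok"


def check_prompt(prompt: str) -> bool:
    """Screen the prompt before any image is generated."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def generate(prompt: str) -> bytes | None:
    """Check at the point of creation: the prompt first, then the output."""
    if not check_prompt(prompt):
        return None                 # refuse at the prompt stage
    image = run_model(prompt)
    if classify_output(image) != "ok":
        return None                 # refuse at the output stage
    return image


print(generate("pink bear riding a bicycle") is not None)  # True: allowed through both gates
```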

When you see that changing, do you think that happens more and more on device over time as computers get more powerful? Or do you think it happens more and more in the cloud as your business pushes more people to the cloud?

I think there will be a hybrid. If you can make this happen on a device, there are a lot of advantages to being on the device: the latency between typing the prompt, calling the model, and getting the image back. Having that on chip would be a better user experience, and all we're trying to do here is make sure we can have the best user experience. Absolutely, I think, in the future, if you're able to do it on chip, it would be pretty interesting for us to be there, too.

Last question here: the AI conversation is way ahead of the law, right? There's the AI Act in the EU, which is, again, pretty nascent and just in its early stages, but at least it's passed. Here in the United States, we have the executive order from the Biden administration, which is basically just a series of prompts for companies to go think about things, plus some transparency requirements. It's long, but it is not, you know… there's an election coming up, and executive orders come and go.

In any case, the industry is way ahead of the law with AI, especially when it comes to copyright law, especially when it comes to safety. What are your internal checks and balances? You talked about Adobe’s values at the beginning of this conversation. Are you holding all of your decisions against a set of values? Do you think the market needs the regulation in order to stay in line? Where’s the balance for you?

When we started our AI ethics program four years ago, we had set up principles for Adobe to follow, and those are accountability, responsibility, and transparency, which spells ART, which was really important.

Well done.

Thank you. And that’s really how we govern all the technology. So everybody who creates an AI feature at Adobe has to fill out an AI impact assessment, every engineer. And it says, “What is the impact of the feature that you’re creating?” And there’s a short form and a long form. And the short form is essentially, we’re choosing what font the user wants using AI, and we’re like, ship it. There can be no downside to that. So we want innovation to get out the door as fast as possible, it’s low risk.

And then, if there’s something else that we think about as higher risk, like Firefly or the way we’re building a model, then the AI Ethics Review Board itself, which is a diverse group of people who are cross-functional, so not just engineers, but engineers and marketers and communication and research, they’ll look at it and say, “Hey, on behalf of Adobe, this is sort of how we feel the technology should get built and should be shipped.” We also have some requirements that every AI feature that gets shipped has a feedback mechanism built into it so users can give us feedback because we know AI will not be perfect. 

And because of its black box nature, it will never be perfect. It shouldn’t be perfect because we want the black box. We want it to do unexpected things, but that means it could do unexpectedly wrong things. And so we need that feedback loop from the community to be able to tell us, “Hey, this thing that you’re doing now, it’s a little off course, go back and retrain it or put in some filtering to make sure we don’t see that.” So having that feedback mechanism as well built into our tools is also really important when we think about building AI. So that’s the transparency piece as well.
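As a toy sketch of how that kind of triage could be encoded, here is one way to represent a short-form impact assessment that lets low-risk features ship immediately, routes higher-risk ones to a review board, and insists on a feedback channel. The risk fields and the routing rule are assumptions for illustration, not Adobe's actual assessment forms.

```python
# Toy illustration of a risk-based AI impact triage. The categories and the
# routing rule are assumptions, not Adobe's actual process.

from dataclasses import dataclass


@dataclass
class ImpactAssessment:
    feature: str
    generates_content: bool    # does it produce new images, text, or audio?
    affects_real_people: bool  # could outputs depict or target real people?
    has_feedback_channel: bool # every shipped AI feature needs one


def route(a: ImpactAssessment) -> str:
    """Decide how an AI feature proceeds based on its assessment."""
    if not a.has_feedback_channel:
        return "blocked: add a user feedback mechanism first"
    if a.generates_content or a.affects_real_people:
        return "long form: send to the AI Ethics Review Board"
    return "short form: low risk, ship it"


print(route(ImpactAssessment("font recommendation", False, False, True)))
print(route(ImpactAssessment("text-to-image generation", True, True, True)))
```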

So those three principles are how we go about governing our own practical implementation. Again, we've been doing this for a while. As for the government, we know there are some roles governments should play in understanding this, because societies will have values that differ from country to country, and Adobe wants to comply with the law.

It would be helpful if, when we interact with the government, they had an understanding of "This is the percentage of errors you can have in an AI" or "This is the percentage of bias you can have in a model," and set out those standards. We were meeting with NIST [National Institute of Standards and Technology] recently, and we were saying they would be a perfect place to set forward the standards by which a model can safely operate in the view of the United States government, something that says, "This is where your model parameters should be from an output perspective." If we comply with that, then we should be able to self-certify, ship, and feel good about what we're shipping. So we do believe there's a role for the government to play in setting out a standard we can attest to.

Yeah, well Dana, I could talk to you for hours and hours about this, I think, as you can tell, but you’ve given us so much time. I really appreciate it. Tell people what they should be looking for next if they’re interested in how AI and copyright law and the regulatory worlds collide. What’s the marker in your mind that people should be looking out for?

I know in our own tools, when I see in the labs the innovation that's coming in video, audio, and 3D, it's just really going to… a year from now, the way you create will be so different and so much more interesting than even the way you create today. So, honestly, I would not be as focused on the law. The law is exciting for me personally.

[Laughs]

But for all the people out there who’ve ever wanted to create but felt like they couldn’t because the tools are too hard or the expertise is too hard, it’s going to get a lot easier to let your inner child out and express yourself. And that’s a cool place to be.

That’s awesome. Well, Dana, thank you so much for coming on Decoder. We’re going to have to have you back again soon.

Absolutely. Thanks for the time. 
