Women in AI: Anika Collier Navaroli is working to shift the power imbalance

Image Credits: Anika Collier Navaroli / Bryce Durbin / TechCrunch

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution.

Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and a Technology Public Voices Fellow with the OpEd Project, held in collaboration with the MacArthur Foundation.

She is known for her research and advocacy work within technology. Previously, she worked as a race and technology practitioner fellow at the Stanford Center on Philanthropy and Civil Society. Before this, she led Trust & Safety at Twitch and Twitter. Navaroli is perhaps best known for her congressional testimony about Twitter, where she spoke about the ignored warnings of impending violence on social media that prefaced what would become the January 6 Capitol attack.

Briefly, how did you get your start in AI? What attracted you to the field? 

About 20 years ago, I was working as a copy clerk in the newsroom of my hometown paper during the summer when it went digital. Back then, I was an undergrad studying journalism. Social media sites like Facebook were sweeping over my campus, and I became obsessed with trying to understand how laws built on the printing press would evolve with emerging technologies. That curiosity led me through law school, where I migrated to Twitter, studied media law and policy, and watched the Arab Spring and Occupy Wall Street movements play out. I put it all together and wrote my master’s thesis about how new technology was transforming the way information flowed and how society exercised freedom of expression.

I worked at a couple of law firms after graduation and then found my way to the Data & Society Research Institute, where I led the new think tank’s research on what was then called “big data,” civil rights, and fairness. My work there looked at how early AI systems like facial recognition software, predictive policing tools, and criminal justice risk assessment algorithms were replicating bias and creating unintended consequences that impacted marginalized communities. I then went on to work at Color of Change and lead the first civil rights audit of a tech company, develop the organization’s playbook for tech accountability campaigns, and advocate for tech policy changes to governments and regulators. From there, I became a senior policy official inside Trust & Safety teams at Twitter and Twitch.

What work are you most proud of in the AI field?

I am the most proud of my work inside of technology companies using policy to practically shift the balance of power and correct bias within culture and knowledge-producing algorithmic systems. At Twitter, I ran a couple campaigns to verify individuals who shockingly had been previously excluded from the exclusive verification process, including Black women, people of color, and queer folks. This also included leading AI scholars like Safiya Noble, Alondra Nelson, Timnit Gebru, and Meredith Broussard. This was in 2020 when Twitter was still Twitter. Back then, verification meant that your name and content became a part of Twitter’s core algorithm because tweets from verified accounts were injected into recommendations, search results, home timelines, and contributed toward the creation of trends. So working to verify new people with different perspectives on AI fundamentally shifted whose voices were given authority as thought leaders and elevated new ideas into the public conversation during some really critical moments. 

I’m also very proud of the research I conducted at Stanford that came together as Black in Moderation. When I was working inside of tech companies, I also noticed that no one was really writing or talking about the experiences that I was having every day as a Black person working in Trust & Safety. So when I left the industry and went back into academia, I decided to speak with Black tech workers and bring to light their stories. The research ended up being the first of its kind and has spurred so many new and important conversations about the experiences of tech employees with marginalized identities. 

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?  

As a Black queer woman, navigating male-dominated spaces and spaces where I am othered has been a part of my entire life journey. Within tech and AI, I think the most challenging aspect has been what I call in my research “compelled identity labor.” I coined the term to describe frequent situations where employees with marginalized identities are treated as the voices and/or representatives of entire communities who share their identities. 

Because of the high stakes that come with developing new technology like AI, that labor can sometimes feel almost impossible to escape. I had to learn to set very specific boundaries for myself about what issues I was willing to engage with and when. 

What are some of the most pressing issues facing AI as it evolves?

According to investigative reporting, current generative AI models have gobbled up all the data on the internet and will soon run out of available data to devour. So the largest AI companies in the world are turning to synthetic data, or information generated by AI itself, rather than humans, to continue to train their systems. 

The idea took me down a rabbit hole. So, I recently wrote an op-ed arguing that I think this use of synthetic data as training data is one of the most pressing ethical issues facing new AI development. Generative AI systems have already shown that, based on their original training data, their outputs replicate bias and create false information. So the pathway of training new systems with synthetic data would mean constantly feeding biased and inaccurate outputs back into the system as new training data. I described this as potentially devolving into a feedback loop to hell.

Since I wrote the piece, Mark Zuckerberg touted that Meta’s updated Llama 3 chatbot was partially powered by synthetic data and was the “most intelligent” generative AI product on the market.

What are some issues AI users should be aware of?

AI is such an omnipresent part of our lives, from spellcheck and social media feeds to chatbots and image generators. In many ways, society has become the guinea pig for the experiments of this new, untested technology. But AI users shouldn’t feel powerless.

I’ve been arguing that technology advocates should come together and organize AI users to call for a People Pause on AI. I think that the Writers Guild of America has shown that with organization, collective action, and patient resolve, people can come together to create meaningful boundaries for the use of AI technologies. I also believe that if we pause now to fix the mistakes of the past and create new ethical guidelines and regulation, AI doesn’t have to become an existential threat to our futures. 

What is the best way to responsibly build AI?

My experience working inside of tech companies showed me how much it matters who is in the room writing policies, presenting arguments, and making decisions. My pathway also showed me that I developed the skills I needed to succeed within the technology industry by starting in journalism school. I’m now back working at Columbia Journalism School and I am interested in training up the next generation of people who will do the work of technology accountability and responsibly developing AI both inside of tech companies and as external watchdogs. 

I think [journalism] school gives people such unique training in interrogating information, seeking truth, considering multiple viewpoints, creating logical arguments, and distilling facts and reality from opinion and misinformation. I believe that’s a solid foundation for the people who will be responsible for writing the rules for what the next iterations of AI can and cannot do. And I’m looking forward to creating a more paved pathway for those who come next. 

I also believe that in addition to skilled Trust & Safety workers, the AI industry needs external regulation. In the U.S., I argue that this should come in the form of a new agency to regulate American technology companies with the power to establish and enforce baseline safety and privacy standards. I’d also like to continue to work to connect current and future regulators with former tech workers who can help those in power ask the right questions and create new nuanced and practical solutions. 
