AI

Securing generative AI across the technology stack

Comment

Pixelated data representation with red physical cube shapes on a black artificial background, tile-able composition
Image Credits: matejmo / Getty Images

Connie Qian

Contributor
Connie Qian is a vice president at Forgepoint Capital. She focuses on early-stage enterprise software companies in security and adjacent sectors, including AI/ML, infrastructure software, and fintech.

Research shows that by 2026, over 80% of enterprises will be leveraging generative AI models, APIs, or applications, up from less than 5% today.

This rapid adoption raises new considerations regarding cybersecurity, ethics, privacy, and risk management. Among companies using generative AI today, only 38% mitigate cybersecurity risks, and just 32% work to address model inaccuracy.

My conversations with security practitioners and entrepreneurs have concentrated on three key factors:

  1. Enterprise generative AI adoption brings additional complexities to security challenges, such as overprivileged access. For instance, while conventional data loss prevention tools effectively monitor and control data flows into AI applications, they often fall short with unstructured data and more nuanced factors such as ethical rules or biased content within prompts.
  2. Market demand for various GenAI security products is closely tied to the trade-off between ROI potential and inherent security vulnerabilities of the underlying use cases for which the applications are employed. This balance between opportunity and risk continues to evolve based on the ongoing development of AI infrastructure standards and the regulatory landscape.
  3. Much like traditional software, generative AI must be secured across all architecture levels, particularly the core interface, application, and data layers. Below is a snapshot of various security product categories within the technology stack, highlighting areas where security leaders perceive significant ROI and risk potential.
Table showing data for securing GenAI tech stack
Image Credits: Forgepoint Capital

Interface layer: Balancing usability with security

Businesses see immense potential in leveraging customer-facing chatbots, particularly customized models trained on industry and company-specific data. The user interface is susceptible to prompt injections, a variant of injection attacks aimed at manipulating the model’s response or behavior.

In addition, chief information security officers (CISOs) and security leaders are increasingly under pressure to enable GenAI applications within their organizations. While the consumerization of the enterprise has been an ongoing trend, the rapid and widespread adoption of technologies like ChatGPT has sparked an unprecedented, employee-led drive for their use in the workplace.

Widespread adoption of GenAI chatbots will prioritize the ability to accurately and quickly intercept, review, and validate inputs and corresponding outputs at scale without diminishing user experience. Existing data security tooling often relies on preset rules, resulting in false positives. Tools like Protect AI’s Rebuff and Harmonic Security leverage AI models to dynamically determine whether or not the data passing through a GenAI application is sensitive.

Due to the inherently non-deterministic nature of GenAI tools, a security vendor would need to understand the model’s expected behavior and tailor its response based on the type of data it seeks to protect, such as personal identifiable information (PII) or intellectual property. These can be highly variable by use case as GenAI applications are often specialized for particular industries, such as finance, transportation, and healthcare.

Like the network security market, this segment could eventually support multiple vendors. However, in this area of significant opportunity, I expect to see a competitive rush to establish brand recognition and differentiation among new entrants initially.

Application layer: An evolving enterprise landscape

Generative AI processes are predicated on sophisticated input and output dynamics. Yet they also grapple with threats to model integrity, including operational adversarial attacks, decision bias, and the challenge of tracing decision-making processes. Open source models benefit from collaboration and transparency but can be even more susceptible to model evaluation and explainability challenges.

While security leaders see substantial potential for investment in validating the safety of ML models and related software, the application layer still faces uncertainty. Since enterprise AI infrastructure is relatively less mature outside established technology firms, ML teams rely primarily on their existing tools and workflows, such as Amazon SageMaker, to test for misalignment and other critical functions today.

Over the longer term, the application layer could be the foundation for a stand-alone AI security platform, particularly as the complexity of model pipelines and multimodel inference increase the attack surface. Companies like HiddenLayer provide detection and response capabilities for open source ML models and related software. Others, like Calypso AI, have developed a testing framework to stress-test ML models for robustness and accuracy.

Technology can help ensure models are fine-tuned and trained within a controlled framework, but regulation will likely play a role in shaping this landscape. Proprietary models in algorithmic trading became extensively regulated after the 2007–2008 financial crisis. While generative AI applications present different functions and associated risks, their wide-ranging implications for ethical considerations, misinformation, privacy, and intellectual property rights are drawing regulatory scrutiny. Early initiatives by governing bodies include the European Union’s AI Act and the Biden administration’s Executive Order on AI.

Data layer: Building a secure foundation

The data layer is the foundation for training, testing, and operating ML models. Proprietary data is regarded as the core asset of generative AI companies, not just the models, despite the impressive advancements in foundational LLMs over the past year.

Generative AI applications are vulnerable to threats like data poisoning, both intentional and unintentional, and data leakage, mainly through vector databases and plug-ins linked to third-party AI models. Despite some high-profile events around data poisoning and leakage, security leaders I’ve spoken with didn’t identify the data layer as a near-term risk area compared to the interface and application layers. Instead, they often compared inputting data into GenAI applications to standard SaaS applications, similar to searching in Google or saving files to Dropbox.

This may change as early research suggests that data poisoning attacks may be easier to execute than previously thought, requiring less than 100 high-potency samples rather than millions of data points.

For now, more immediate concerns around data were closer to the interface layer, particularly around the capabilities of tools like Microsoft Copilot to index and retrieve data. Although such tools respect existing data access restrictions, their search functionalities complicate the management of user privileges and excessive access.

Integrating generative AI adds another layer of complexity, making it challenging to trace data back to its origins. Solutions like data security posture management can aid in data discovery, classification, and access control. However, it requires considerable effort from security and IT teams to ensure the appropriate technology, policies, and processes are in place.

Ensuring data quality and privacy will raise significant new challenges in an AI-first world due to the extensive data required for model training. Synthetic data and anonymization such as Gretel AI, while applicable broadly for data analytics, can help prevent scenarios of unintentional data poisoning through inaccurate data collection. Meanwhile, differential privacy vendors like Sarus can help restrict sensitive information during data analysis and prevent entire data science teams from accessing production environments, thereby mitigating the risk of data breaches.

The road ahead for generative AI security

As organizations increasingly rely on generative AI capabilities, they will need AI security platforms to be successful. This market opportunity is ripe for new entrants, especially as the AI infrastructure and regulatory landscape evolves. I’m eager to meet the security and infrastructure startups enabling this next phase of the AI revolution — ensuring enterprises can safely and securely innovate and grow.

More TechCrunch

iOS 18 will be available in the fall as a free software update.

Here are all the devices compatible with iOS 18

The tests indicate there are loopholes in TikTok’s ability to apply its parental controls and policies effectively in a situation where the teen user originally lied about their age, as…

TikTok glitch allows Shop to appear to users under 18, despite adults-only policy

Lhoopa has raised $80 million to address the lack of affordable housing in Southeast Asian markets, starting with the Philippines.

Lhoopa raises $80M to spur more affordable housing in the Philippines

Former President Donald Trump picked Ohio Senator J.D. Vance as his running mate on Monday, as he runs to reclaim the office he lost to President Joe Biden in 2020.…

Trump’s VP candidate JD Vance has long ties to Silicon Valley, and was a VC himself

Hello and welcome back to TechCrunch Space. Is it just me, or is the news cycle only accelerating this summer?!

TechCrunch Space: Space cowboys

Apple Intelligence features are not available in the developer beta, which is out now.

Without Apple Intelligence, iOS 18 beta feels like a TV show that’s waiting for the finale

Apple released the public betas for its next generation of software on the iPhone, Mac, iPad and Apple Watch on Monday. You can now test out iOS 18 and many…

Apple’s public betas for iOS 18 are here to test out

One major dissenter threatens to upend Fisker’s apparent best chance at offloading its unsold EVs, a deal that would keep the startup’s bankruptcy proceeding alive and pave the way for…

Fisker has one major objector to its Ocean SUV fire sale

Payments giant Stripe has delayed going public for so long that its major investor Sequoia Capital is getting creative to offer returns to its limited partners. The venture firm emailed…

Major Stripe investor Sequoia confirms $70B valuation, offers its investors a payday

Alphabet, Google’s parent company, is in advanced talks to acquire Wiz for $23 billion, a person close to the company told TechCrunch. The deal discussions were previously reported by The…

Google’s Kurian approached Wiz, $23B deal could take a week to land, source says

Name That Bird determines individual members of a species by identifying distinguishing characteristics that most humans would be hard-pressed to spot.

Bird Buddy’s new AI feature lets people name and identify individual birds

YouTube Music is introducing two new ways to boost song discovery on its platform. YouTube announced on Monday that it’s experimenting with an AI-generated conversational radio feature, and rolling out…

YouTube Music is testing an AI-generated radio feature and adding a song recognition tool

Tesla had internally planned to build the dedicated robotaxi and the $25,000 car, often referred to as the Model 2, on the same platform.

Elon Musk confirms Tesla ‘robotaxi’ event delayed due to design change

What this means for the space industry is that theory has become reality: The possibility of designing a habitation within a lunar tunnel is a reasonable proposition.

Moon cave! Discovery could redirect lunar colony and startup plays

Get ready for a prime week of savings at TechCrunch Disrupt 2024 with the launch of Disrupt Deal Days! From now to July 19 at 11:59 p.m. PT, we’re going…

Disrupt Deal Days are here: Prime savings for TechCrunch Disrupt 2024!

Deezer is the latest music streaming app to introduce an AI playlist feature. The company announced on Monday that a select number of paid users will be able to create…

Deezer chases Spotify and Amazon Music with its own AI playlist generator

Real-time payments are becoming commonplace for individuals and businesses, but not yet for cross-border transactions. That’s what Caliza is hoping to change, starting with Latin America. Founded in 2021 by…

Caliza lands $8.5 million to bring real-time money transfers to Latin America using USDC

Adaptive is a platform that provides tools designed to simplify payments and accounting for general construction contractors.

Adaptive builds automation tools to speed up construction payments

When VanMoof declared bankruptcy last year, it left around 5,000 customers who had preordered e-bikes in the lurch. Now VanMoof is up and running under new management, and the company’s…

How VanMoof’s new owners plan to win over its old customers

Mitti Labs aims to transform rice farming in India and other South Asian markets by reducing methane emissions by 50% and water consumption by 30%.

Mitti Labs aims to make rice farming less harmful to the climate, starting in India

This is a guide on how to check whether someone compromised your online accounts.

How to tell if your online accounts have been hacked

There is a general consensus today that generative AI is going to transform business in a profound way, and companies and individuals who don’t get on board will be quickly…

The AI financial results paradox

Google’s parent company Alphabet might be on the verge of making its biggest acquisition ever. The Wall Street Journal reports that Alphabet is in advanced talks to acquire Wiz for…

Google reportedly in talks to acquire cloud security company Wiz for $23B

Featured Article

Hank Green reckons with the power — and the powerlessness — of the creator

Hank Green has had a while to think about how social media has changed us. He started making YouTube videos in 2007 with his brother, novelist John Green, at a time when the first iPhone was in development, Myspace was still relevant and Instagram didn’t exist. Seventeen years later, posting…

Hank Green reckons with the power — and the powerlessness — of the creator

Here is a timeline of Synapse’s troubles and the ongoing impact it is having on banking consumers. 

Synapse’s collapse has frozen nearly $160M from fintech users — here’s how it happened

Featured Article

Helixx wants to bring fast-food economics and Netflix pricing to EVs

When Helixx co-founder and CEO Steve Pegg looks at Daisy — the startup’s 3D-printed prototype delivery van — he sees a second chance. And he’s pulling inspiration from McDonald’s to get there.  The prototype, which made its global debut this week at the Goodwood Festival of Speed, is an interesting proof…

Helixx wants to bring fast-food economics and Netflix pricing to EVs

Featured Article

India clings to cheap feature phones as brands struggle to tap new smartphone buyers

India is struggling to get new smartphone buyers, as millions of Indians don’t go for an upgrade and continue to be on feature phones.

India clings to cheap feature phones as brands struggle to tap new smartphone buyers

Roboticists at The Faboratory at Yale University have developed a way for soft robots to replicate some of the more unsettling things that animals and insects can accomplish — say,…

Meet the soft robots that can amputate limbs and fuse with other robots

Featured Article

If you’re an AT&T customer, your data has likely been stolen

This week, AT&T confirmed it will begin notifying around 110 million AT&T customers about a data breach that allowed cybercriminals to steal the phone records of “nearly all” of its customers. The stolen data contains phone numbers and AT&T records of calls and text messages during a six-month period in…

If you’re an AT&T customer, your data has likely been stolen

In the first half of 2024 alone, more than $35.5 billion was invested into AI startups globally.

Here’s the full list of 28 US AI startups that have raised $100M or more in 2024