Why GenAI requires a heightened focus on security

COMMENTARY: The rapid advancement and widespread adoption of generative AI (GenAI) present both unprecedented opportunities and unique challenges for security and data privacy. The stakes are particularly high now, as companies and developers move beyond experimentation with GenAI tools and begin to commercialize and deploy customer-facing apps.

Because highly capable GenAI models emerged so rapidly, attackers have not yet had time to develop innovative techniques to exploit the models directly or to use them in broader attacks.

However, recent incidents, including those affecting OpenAI, Microsoft's Recall, and the ChatGPT Samsung leak, highlight a growing trend of sensitive user data being exposed through GenAI platforms. As organizations continue to leverage these tools to run workloads, streamline processes and improve efficiency, concerns around data security and data privacy grow. Safeguarding sensitive data starts with understanding what makes GenAI products vulnerable to attackers, and whether traditional tools are enough to keep data secure.

When assessing the vulnerabilities of GenAI products, the question is not simply whether they are more or less vulnerable than other types of software, but how GenAI tools are vulnerable in ways traditional software is not. Traditional software is algorithmic and far more interpretable than the models used in GenAI, which makes understanding, debugging, testing and fixing code much simpler than dealing with the massive complexity involved in GenAI.

Moreover, decades of research into the theory and application of software security have delivered robust means of securing code, whereas consumer-facing AI models are relatively new, and until recently AI researchers were rarely concerned with the security implications of machine learning models. All of these factors highlight a growing need to examine whether traditional tools offer adequate protection.

Why legacy products aren’t enough to protect GenAI tools

Traditional anti-virus products are of little use when it comes to securing GenAI tools and the data that powers them. While there are limited cases where GenAI gets used to write malicious software, which anti-virus or EDR products could then block, these are unlikely to account for the bulk of GenAI security concerns. Specialized protections are required: data loss prevention (DLP) products that restrict the types of data GenAI can access, as well as detection and response capabilities that prevent malicious use or targeting of models.
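
To make the DLP idea concrete, here is a minimal sketch of a filter that redacts sensitive patterns from a prompt before it reaches an external GenAI API. The redact_prompt() helper and the pattern set are illustrative assumptions, not a production DLP ruleset, which would rely on far richer classifiers and policy engines.

```python
import re

# Hypothetical, minimal DLP-style filter: scrub obviously sensitive
# patterns from user input before it is sent to an external GenAI API.
# These regexes are illustrative placeholders, not a complete ruleset.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this: contact jane.doe@example.com, SSN 123-45-6789."
    print(redact_prompt(raw))
    # -> "Summarize this: contact [REDACTED_EMAIL], SSN [REDACTED_SSN]."
```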

Organizations and business leaders also need visibility into how GenAI gets used within their own companies. Many organizations have very little insight into the authorized and unauthorized use of GenAI in their environments, leaving them uncertain about where issues might even arise. According to a McKinsey survey, only 21% of companies using AI have established policies governing GenAI use by their employees.

DLP and extended detection and response (XDR) tools must be developed to secure data and to detect and respond to security issues arising from the use of GenAI. What's more, as companies adopt their own GenAI tools, they will need to proactively secure the environments and data these models use, with the same diligence the transition to the cloud required. On the consumer side, verifying the security of GenAI will likely remain challenging, especially as upstream and third-party applications increasingly integrate their own GenAI tools.
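
One way to picture the detection-and-response side is an audited wrapper around every model call, so prompts can be logged centrally and flagged for triage. The audited_completion() function, the injection markers, and the log schema below are all assumed for illustration; a real deployment would feed these events into an XDR or SIEM pipeline rather than blocking inline.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai_audit")

# Crude, illustrative heuristics; real detections would be far broader.
INJECTION_MARKERS = ("ignore previous instructions", "disregard the system prompt")

def audited_completion(model_call, user_id: str, prompt: str) -> str:
    """Log every GenAI interaction and flag likely prompt-injection attempts."""
    suspicious = any(marker in prompt.lower() for marker in INJECTION_MARKERS)
    log.info("user=%s time=%s suspicious=%s prompt_len=%d",
             user_id, datetime.now(timezone.utc).isoformat(),
             suspicious, len(prompt))
    if suspicious:
        # A production system would raise an alert for analyst review here.
        return "Request blocked pending security review."
    return model_call(prompt)
```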

Data privacy concerns around GenAI

There are relatively straightforward privacy concerns around what data providers retain, and how they handle, secure and anonymize it. Poorly secured systems can lead to data leaks, which is especially concerning given the wide range of use cases for GenAI, from health queries to corporate intellectual property. And because GenAI models require vast amounts of data for training and fine-tuning, there are legitimate concerns about whether customer data could end up in future model releases.
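
As a sketch of what one anonymization step might look like, the hypothetical pseudonymize() helper below swaps email addresses for stable pseudonyms before a record gets retained. This is only a fragment of a real pipeline, which would need keyed hashing, NER-based PII detection, and retention controls on top of it.

```python
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize(text: str) -> str:
    """Replace each email address with a short, deterministic pseudonym.

    Unkeyed hashing is shown only for brevity; it is vulnerable to
    dictionary attacks, so real systems should use a keyed scheme.
    """
    def _sub(match: re.Match) -> str:
        digest = hashlib.sha256(match.group(0).encode()).hexdigest()[:10]
        return f"user_{digest}"
    return EMAIL_RE.sub(_sub, text)

print(pseudonymize("Ticket from jane.doe@example.com about billing."))
# -> "Ticket from user_<10 hex chars> about billing."
```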

With the rise of GenAI tools, organizations must prioritize data security and privacy more than ever. The evolving attack landscape demands additional safeguards and detection capabilities built specifically for the distinctive challenges GenAI presents. Developing robust security tooling will be essential to protect sensitive data effectively and to address the emerging threats linked to GenAI deployment.

Sohrob Kazerounian, Distinguished AI Researcher, Vectra AI

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
