How to securely deploy GenAI applications at scale

Widespread use of large language models and other forms of generative AI (GenAI) presents security risks that many organizations may not be aware of. Fortunately, networking and security providers are finding ways to counter these emerging threats.

As AI becomes more embedded into daily business processes, it runs the risk of exposing proprietary information, delivering erroneous answers, or flat-out "hallucinating" non-existent facts.

These pitfalls must be countered by new policies and procedures that ensure human supervision of AI assistants, as well as supervision and training of the people who don't yet understand the risks of using AI.

"If you think about the [AI] networking demand and the security risk, it increases with the more integration you do into your business data and your business processes," Ken Rutsky, Chief Marketing Officer at Aryaka, told SC Media. "Think about an AI application that might integrate directly into a ticketing system, take the ticket and route it based on the intelligence that the model is trained in. Now that application is directly integrated with the business process. And that creates more risk, because obviously, if that model is hacked, it can mess with the business process."

Just training a large language model (LLM) on proprietary information creates a significant risk of intellectual-property leakage, as the AI may divulge trade or corporate secrets as part of an otherwise innocuous answer. Likewise, exposing the AI to sensitive personal information may inadvertently cause it to reveal those details publicly.

"One of the things these LLM GenAI engines do is they find things that humans can't," Rutsky said in an in-house Aryaka webcast. "And if you think about exposing those discoveries to people who shouldn't have access to it, it's like they may know more about your business than you do."

Guardrails are crucial

AI needs guardrails. Rutsky believes the most important one is to control what the AI has access to.

"You need to have the right policy articulated around access control, around data privacy," he said. "Good security practices around access control and data protection are the first prerequisite as you integrate these applications more into processes."

Aryaka's security model for GenAI also stresses data loss prevention (DLP), but Chief Product Officer Renuka Nadkarni thinks the term should be expanded to cover a bit more.

"There's a new term called knowledge leakage prevention, which I absolutely love, because with AI it is not just about your credit-card information or your PII sensitive data," she said during the in-house webcast. "It's really about your intellectual property."

Threat protection for AI

The third leg of Aryaka's AI-security approach is threat protection: thwarting the types of malicious attacks directed at AI models, which Nadkarni says are often new twists on old tricks.

"Just like with every technology that can be used, it can be abused," she said. "There are similar, but different vectors of attacks that we see. Typically, there was SQL injection back in the day — now there's a command-drop injection. There is data poisoning, and we know about how the DNS was poisoned. It's on similar lines. These attacks are very difficult to identify."

"Threat protection is a big area, and OWASP Top Ten attacks actually have identified some of the emerging categories in the AI security space," Nadkarni added. "So that's another area that needs to be taken care of."

In a recent SC Media webcast, Nadkarni explained how the "one-pass" packet-inspection process in Aryaka's Unified SASE as a Service offering quickly routes and secures AI network traffic.

"Once you open up the packet stream, you do everything," she said. "You do QoS, you do compression, you do WAN optimization, then you take it for access control. You do access control based on user, application, location, all kinds of URL, reputation, DNS, domain, all of those pieces. Then you do threat protection on the same thing with detection, signature match, anomaly, protocol anomaly, then you do CASB, kind of matching for any kind of a DLP."

Security AND efficiency

At the end of the day, Nadkarni told SC Media's Mandy Logan in an interview at RSA Conference 2024, it's all about balancing security and efficiency.

"If I put in many security products, my performance is going to go down," Nadkarni said. "If I try to simplify my change-management, rule-control process, I'm actually gonna make it probably less secure. If I try to make it faster, it is actually compromising the processes. So there is performance, agility, security, all these different trade-offs."

Paul Wagenseil

Paul Wagenseil is a custom content strategist for CyberRisk Alliance, leading creation of content developed from CRA research and aligned to the most critical topics of interest for the cybersecurity community. He previously held editor roles focused on the security market at Tom’s Guide, Laptop Magazine, TechNewsDaily.com and SecurityNewsDaily.com.
