
How AI can make security more proactive and less reactive


In November 2022, the wider world suddenly became aware of the power and potential of artificial intelligence as ChatGPT was made available to the general public.

Yet information-security practitioners were already familiar with automation and machine learning, which they had been using for years in the form of security orchestration, automation and response (SOAR) tools and static and dynamic application security testing (SAST and DAST) tools.

The addition of AI that can learn from its own mistakes and incorporate new experiences into its model promises to greatly accelerate security analysis and response, as well as reinforce defenses against new attack techniques that themselves use AI.

It will let security teams more easily spot and mitigate potential problems before they can be exploited, shifting the workload from reactive to more efficient proactive tasks.

The AI speed boost

Automation has already had an effect on network security, especially when it comes to cloud-based assets and cloud-based security tools. Thanks to automation, new virtual servers can be spun up in the cloud in just a few minutes, and virtual firewalls can be configured just as quickly.

To add new features or abilities to on-premises network devices, you might have to take them offline to install software updates, or even replace the devices entirely at considerable cost. With cloud-based network security tools, however, you can easily and quickly install newer versions of the same software you've already licensed.

"If you need to make a change, you don't actually log on to the servers," said Robert Lowry, vice president of security at The Lifetime Value Company. "You just deploy the new change. Tear down and replace. Same thing with patching. Don't patch, replace — you spin up something fresh."

An even greater boost to network security comes with the assistance that automation provides to teams sifting through gigabytes of logs and other data searching for anomalies and signs of threats. AI will speed up that process as it learns from its own experiences and constantly adds new data to its training model. A recent survey of CISOs and other technology-focused executives found that 70% planned to use AI in cybersecurity and threat detection.
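As a rough illustration of what that kind of anomaly hunting looks like under the hood (not any particular vendor's implementation), the sketch below uses scikit-learn's IsolationForest to flag outliers in simple features derived from log records, such as request counts, outbound bytes and failed logins per host. The feature choices and contamination rate are illustrative assumptions.

```python
# Generic anomaly-detection sketch over log-derived features (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one host/time window: [requests per minute, bytes out (MB), failed logins].
baseline = np.array([
    [12, 0.5, 0],
    [15, 0.7, 1],
    [11, 0.4, 0],
    [14, 0.6, 0],
    [13, 0.5, 1],
])

# Fit on known-normal activity; contamination is the assumed fraction of outliers.
model = IsolationForest(contamination=0.05, random_state=42).fit(baseline)

# New observations: the second row (huge outbound transfer, many failed logins) should stand out.
observed = np.array([
    [13, 0.6, 0],
    [14, 250.0, 40],
])

labels = model.predict(observed)  # 1 = looks normal, -1 = anomaly
for row, label in zip(observed, labels):
    if label == -1:
        print(f"Possible anomaly worth an analyst's review: {row}")
```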

Better detection, fewer false positives

Aviv Abramovich, head of security services product management at network-security provider Check Point, said AI and ML tools had dramatically improved the threat-detection capabilities of the company’s products, thanks to AI's pattern-recognition abilities.

"Using machine learning and deep learning, we were able to increase our accuracy in detecting and preventing new threats and reducing false positives considerably," said Abramovich. "We had 90% less false positives and 40% to 50% more detecting more malicious activity than before."

He added that AI's real advantage lies in its ability "to identify the things that are really malicious versus the ones that might look like it, or look just abnormal, but are actually not malicious."

Beyond threat analysis, Check Point has introduced what it calls Infinity Copilot, an AI-based assistant that takes instructions in natural language and helps security teams and administrators spot overlapping or conflicting policies.

Copilot and similar AI assistants can provide guidance and suggestions for mitigation, create proposed policies, and troubleshoot issues for quick ticket resolution. They will also boost team productivity and relieve stress on staff. 
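To make the idea of overlapping or conflicting policies concrete, here is a generic sketch (not Check Point's actual engine or data model) that uses Python's ipaddress module to flag pairs of firewall rules whose source and destination ranges overlap but whose actions disagree. The rule format and field names are hypothetical.

```python
# Hypothetical rule format; flags rules that cover overlapping traffic but disagree on action.
from ipaddress import ip_network
from itertools import combinations

rules = [
    {"name": "allow-web",   "src": "10.0.0.0/16",   "dst": "192.168.1.0/24", "action": "allow"},
    {"name": "block-guest", "src": "10.0.5.0/24",   "dst": "192.168.1.0/24", "action": "deny"},
    {"name": "allow-vpn",   "src": "172.16.0.0/12", "dst": "192.168.2.0/24", "action": "allow"},
]

def conflicts(rule_a, rule_b):
    """Two rules conflict if their source and destination ranges overlap but actions differ."""
    return (
        ip_network(rule_a["src"]).overlaps(ip_network(rule_b["src"]))
        and ip_network(rule_a["dst"]).overlaps(ip_network(rule_b["dst"]))
        and rule_a["action"] != rule_b["action"]
    )

for a, b in combinations(rules, 2):
    if conflicts(a, b):
        print(f"Conflicting policies: {a['name']} ({a['action']}) vs {b['name']} ({b['action']})")
```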

"We are now also able to use AI not only for better threat prevention, but also for security administrative tasks," Abramovich says. "We have an AI assistant that you can ask to do all sorts of things. You can actually write to it in human language."

The beginnings of an AI arms race

Of course, AI can aid attackers as well. A Check Point blog post from early January 2023, six weeks after ChatGPT became generally accessible, detailed how low-skilled coders were already using the AI tool to create information-stealing malware, ransomware and phishing emails.

A study released in December 2022 by Traficom, the Finnish Transport and Communications Agency, predicted "a new arms race that exploits cutting-edge AI technologies for both attack and defense."

It also foresaw accelerated software-patching timelines as AI quickly exploits newly disclosed vulnerabilities, and the deprecation of voice authentication and other biometric methods as deepfakes proliferate.

Scariest of all, the study predicted that by 2027 or 2028, there could be AI-powered malware that independently changes attack strategies and alters its own code to evade defenses, "living off the land" by using AI code libraries built into PCs instead of carrying its own learning models.

"Winning this new arms race will boil down to attackers and defenders vying to adopt new AI advances first," the Traficom report said. "New security solutions will have to leverage AI advances before attackers do."

Don't forget the human element

One question about AI is whether it will generate anything new, or just speed up what humans already do, for better or for worse.

"I don't think AI is bringing any net benefits. Quite the opposite for cybersecurity, at least for the foreseeable future," said Julian Mihai, CISO of Penn Medicine. "It's much easier to exploit a bug than to prevent it from being exploitable throughout your entire environment. So there's an asymmetry of efforts there."

David Sinclair, founder and CEO of 4FreedomMobile, said that AI has its limits, and that human supervision will continue to be necessary to interpret and act on what AI finds, and on what AI might recommend.

"AI is going to have an enormous impact in terms that it's going to automate a lot of the work both for attack and for defense," he says. "But you've got to have a human head at the top that's getting the big picture. Because AI, at least at the level of development that AI is at today, it's really good at identifying the trees. It's not good at seeing the forest."

However, Check Point's Abramovich was more upbeat about AI's potential to augment rather than replace human intuition and intelligence, especially when it comes to cybersecurity.

"What AI is doing is generating its own intelligence from the intelligence that already exists," he said. "The importance of using AI is to create more intelligence to help you do things better."

Paul Wagenseil

Paul Wagenseil is a custom content strategist for CyberRisk Alliance, leading creation of content developed from CRA research and aligned to the most critical topics of interest for the cybersecurity community. He previously held editor roles focused on the security market at Tom’s Guide, Laptop Magazine, TechNewsDaily.com and SecurityNewsDaily.com.
