
Security awareness training meets a new obstacle: Generative AI


For as long as email has existed, it’s been one of the most vulnerable attack vectors for organizations. Cybercriminals know that email systems serve up a goldmine of sensitive data and a gateway into the corporate network. They also know that email serves as a launch pad for social engineering attacks such as phishing and business email compromise (BEC) — attacks that prey on the human element, tricking unsuspecting victims into giving away their account credentials, money, or other financial information.

Determined threat actors have become highly skilled at manipulating even the most vigilant employees. In recent years, we’ve seen them evolve from basic “spray and pray” attacks riddled with typos, grammatical errors, and other red flags to advanced targeted attacks written in flawless English and sent from spoofed domains or even legitimate compromised domains.

As a result of this shift, security awareness training (SAT) has risen as a top cyber strategy in many organizations. Security leaders have realized that defense needs to start with their weakest link — their people — and are beginning to invest more in programs that can train employees to accurately identify email threats. According to Cybersecurity Ventures, the security awareness training market was worth $5.6 billion in 2023 and could almost double in value by 2027 to more than $10 billion.

Various studies have shown that SAT programs can effectively lower the cost of phishing attacks on businesses. But this year, we may begin to see a different story, as SAT efficacy comes up against a new obstacle: generative AI.

How generative AI transformed email threats

When ChatGPT was released in late 2022, it sent the digital world into a frenzy — everyone from academics to knowledge workers and everyday consumers tapped the application to get work done faster and smarter. Since then, the generative AI wave has continued to pick up steam with the launch of additional tools such as Bing AI, Google Bard, and Claude.

But an unintended consequence of the generative AI explosion has been its adoption by cybercriminals eager to reap the same productivity benefits. Now, even inexperienced and unskilled threat actors can use a tool like ChatGPT (or one of its malicious variants, such as WormGPT or FraudGPT) to write phishing and BEC emails more quickly and convincingly.

Not only can cybercriminals now write emails that are error-free, professionally toned, and accurately translated, they are also weaponizing generative AI to deliver attacks targeted at specific individuals. For example, simply by prompting a generative AI tool with information about a target (such as a link to the target’s social media profiles), attackers can send highly personalized and believable lures in greater volumes than ever before.

What it means for security awareness training

The industry has largely understood that phishing attacks were already becoming harder to recognize as cybercriminals sharpened their social engineering prowess. Now, with generative AI tools in their arsenals, these attacks have only become harder to spot. Modern email attacks are increasingly realistic and nearly impossible to distinguish from legitimate communications. Without the presence of traditional attack indicators, SAT’s efficacy drops dramatically.

SAT programs are still important, as low-level email attacks aren’t going away. Security teams should continue training employees on the telltale signs of a traditional email attack, but they should also update these programs to keep pace with how the threats evolve.

For instance, even if an email comes from a legitimate domain and is free of spelling and grammatical errors, employees should watch for any language requesting sensitive information, especially if the sender instills a sense of urgency. Employees should also learn the proper verification steps to take whenever an email asks them to act on financial transactions or account authentication.

SAT should remain an important component of onboarding for all new employees, but it should also be revisited regularly for existing ones. Because cybercriminal tactics constantly evolve, organizations should conduct refreshers every four to six months. There are also plenty of tools on the market today that can help automate these training sessions.

SAT should continue as a core component of a company’s cyber strategy, but it’s not infallible, and additional layers of security ensure the best possible protection against advanced threats.

In addition to implementing foundational security measures such as multi-factor authentication, password managers, and least privilege, an email security solution can help deliver comprehensive detection, especially for those seemingly realistic email attacks that go unnoticed by the human eye.

I am very interested to see how SAT outcomes shift this year, as AI-generated attacks continue to pick up momentum among threat groups. But don’t wait to find out: now’s the time to revisit and update SAT programs, as well as the company’s broader email security strategy.

Mike Britton, chief information security officer, Abnormal Security

Mike Britton

Mike Britton, chief information security officer at Abnormal Security, leads the company’s information security and privacy programs. He builds and maintains Abnormal Security’s customer trust program, performs vendor risk analysis, and protects the workforce with proactive monitoring of the multi-cloud infrastructure. Mike brings 25 years of information security, privacy, compliance, and IT experience from multiple Fortune 500 global companies.

LinkedIn: https://www.linkedin.com/in/mrbritton/

X: https://twitter.com/AbnormalSec
