
Why security awareness training needs to modernize


Despite increased investments in security training, successful email attacks are on the rise. In fact, business email compromise (BEC) attacks increased by 108% between 2022 and 2023 alone. So why isn’t security awareness preventing more incidents?

Unfortunately, traditional security education doesn’t keep users engaged. Every workday, employees open their devices to a barrage of emails, direct messages, and meeting-packed calendars. Under the pressure of looming deadlines and overloaded schedules, employees often push security to the back burner or speed through mandatory training initiatives. Meanwhile, email attacks grow in sophistication, and as cybercriminals refine their social engineering tactics, it’s become harder to identify potential threats.

An organization’s workforce stands as an essential line of defense against cybercrime, but expecting annual training to suffice — and expecting employees to seek out security education in their own limited time — can have grave consequences.

It’s time to change our approach to security awareness training (SAT) to reflect the reality of our rapidly evolving threat landscape and the new tools we have available.

Generative AI raises the stakes

Generative AI has become easy for nearly anyone to access and use — including cybercriminals. Threat actors are leveraging tools like ChatGPT, and even malicious versions like WormGPT and FraudGPT, to quickly create unique content free of grammatical errors, unnatural language, and other red flags email users have come to associate with suspicious emails.

Weaponized Generative AI also makes it easier for even the most unskilled threat actors to carry out sophisticated social engineering attacks. By inputting personal details they’ve found through a simple search or linking to a target’s social media account, cybercriminals can generate personalized messages that breeze through spam filters and fool even the most security-conscious user.

For years, security education focused on teaching employees common warning signs of phishing emails and how to report threats to their security teams. While there’s value in helping employees develop a keen eye for suspicious communications and empowering them to take an active role in staving off an attack, it’s not enough. The sheer volume of threats enabled by Generative AI, combined with the fact that this technology makes email attacks nearly undetectable to the human eye, means SAT programs have become significantly less effective.

Modernize security awareness training with a two-way model

Despite the proliferation of Generative AI-assisted attacks, SAT programs are still critical to an organization’s security strategy. They offer a good foundation of security knowledge and help foster a culture where employees care about security and view it as a shared responsibility, rather than merely the job of the security team. That said, SAT as we know it today does not do the job in this new era of highly sophisticated BEC attacks and constantly evolving threats.

Traditional security education makes us too reliant on an employee’s ability to retain everything they’ve learned in annual training sessions and carve out time for continuing their security education. Even in an organization with a strong culture of security, staying up-to-date on emerging threats usually ranks low on a non-security professional's priority list. That’s not because they don’t care, but because self-training takes time out of the day, and most employees aren’t engaging with the security team enough for those topics to remain top-of-mind.

Additionally, even when employees take time to report suspicious emails, busy security teams often either fail to close the loop on whether the reported email is a legitimate threat or send an impersonal canned response. This reduces an employee’s likelihood of reporting emails in the future, and it also misses a valuable opportunity to recognize employees for supporting security efforts, reinforce good habits, and educate them on new threats.

As we evolve security education, we need to stop putting the responsibility entirely on the user. Instead, we need to rally around two-way models that engage employees through real-time discussion, giving users a way to ask questions and strengthen their security knowledge as learning opportunities arise.

It's critical to deliver this engagement “just-in-time.” The security team should offer an explanation immediately after a user reports a suspicious email, letting them know whether it was malicious, along with security best practices for the specific threat in question. This helps facilitate learning at the moment when employees are most tuned into threat awareness, and also offers immediate support and encouragement to continue exercising vigilance and reporting.

Of course, responding to every user immediately and with custom responses requires a massive amount of resources that today’s security teams simply don’t have. In the same way that cybercriminals are leveraging Generative AI to scale the creation of targeted content, what if security teams could do the same?

Generative AI could make it possible for security teams to efficiently respond to all reported emails immediately with customized messages outlining why an anomaly is a threat, or why it isn't a threat. Customizations could incorporate the company’s specific security protocols and resources, or even its brand voice. Going a step further, conversational AI capabilities could allow for chatbot-like conversations that quickly address an employee’s follow-up question, engaging them in an ongoing dialogue about the reported threat.
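As a rough illustration, the "closing the loop" step described above might look like the following sketch. The names here (`ReportVerdict`, `build_response`) are hypothetical, not any real product's API, and a static template stands in for the text a generative model would produce:

```python
# Hypothetical sketch of a just-in-time response to a reported email.
# In practice, an LLM would generate the tailored message; a template
# stands in for that output here. All names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ReportVerdict:
    reporter: str        # employee who reported the email
    is_malicious: bool   # security team's (or model's) determination
    threat_type: str     # e.g. "credential phishing", "BEC"
    guidance: str        # best practice tied to this specific threat


def build_response(verdict: ReportVerdict) -> str:
    """Compose a personalized closing-the-loop message for the reporter."""
    if verdict.is_malicious:
        summary = (f"the email you reported was confirmed as "
                   f"{verdict.threat_type} and has been removed from "
                   f"affected inboxes.")
    else:
        summary = "the email you reported was reviewed and found to be safe."
    return (f"Hi {verdict.reporter}, thanks for reporting. "
            f"Good news on the follow-up: {summary} "
            f"Tip: {verdict.guidance}")
```

A conversational layer could then accept follow-up questions from the employee and answer them against the same verdict context, turning a one-line acknowledgment into the ongoing dialogue the article describes.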

In these ways, generative AI gives security teams the power to transform threat reporting into a valuable learning experience, helping employees build their security awareness and empowering them to continue their role in defending organizations against cyberattacks.

There’s no denying cybercriminals’ strategies are evolving faster than traditional security training can keep up. While SAT programs are a bedrock for building a culture of security, they have limited impact — particularly for organizations that aren’t doing more to keep employees engaged with security between training sessions. However, implementing two-way support models and leveraging Generative AI builds on the foundation SAT programs lay, ensuring employees have the information and encouragement they need to deliver the best defense.

Mike Britton, chief information security officer, Abnormal Security

Mike Britton

Mike Britton, chief information security officer at Abnormal Security, leads the company’s information security and privacy programs. Mike builds and maintains Abnormal Security’s customer trust program, performs vendor risk analysis, and protects the workforce with proactive monitoring of the multi-cloud infrastructure. Mike brings 25 years of information security, privacy, compliance, and IT experience from multiple Fortune 500 global companies.

LinkedIn: https://www.linkedin.com/in/mrbritton/

X: https://twitter.com/AbnormalSec
