
How AI can make it less frustrating for employees to report malicious emails


Email remains the top cybersecurity risk, accounting for $51 billion in exposed losses since 2013, according to the FBI Internet Crime Complaint Center. Why? Because threat actors know that human users are the weakest link in the security chain, easily exploited through deceptive social engineering tactics. As a result, attackers are doubling down on phishing, business email compromise (BEC), and vendor fraud. Last year, BEC volume alone increased by 108%.

Many organizations have invested in security awareness training tools to combat the growing email threat, assigning educational modules that teach employees how to identify email threats and follow best practices, including how to report suspicious emails.

While these programs are well-intentioned, they are often a double-edged sword for security teams. We want users to report suspicious emails, but more security awareness training often means employees start reporting everything – even when the messages they suspect are entirely safe.

The security team then has to (often manually) manage a seemingly endless queue of employee-reported emails. Manual triage, remediation, and response are time-consuming, and they also create a poor user experience and missed opportunities for continued security coaching, since employees rarely learn why the email they reported was not considered malicious.

How can organizations continue to build a cyber-savvy workforce without draining the resources of their security teams? The answer lies in AI.

Automate the user-reported email workflow

The user-reported phishing email workflow has become one of the most operationally inefficient processes that security teams grapple with today, requiring a number of tedious steps.

Once a user reports a message and a help desk ticket is created, teams must review and triage the reported message, classify it, scour the email environment for other messages from the same sender or campaign that may have reached others in the organization, and remediate those messages. Then, after a full analysis, they must manually respond to each reporter to let them know the outcome of their submission.

While this works in theory, it’s a much different scenario in practice, when hundreds or even thousands of emails are submitted each day. Cybercriminals often launch phishing attacks in large volumes, hoping to increase the odds that an unwitting employee will fall for their lure. Because AI excels at handling repetitive tasks, the user-reported phishing workflow is a natural use case. There’s a huge opportunity for security analysts to automate more of this process, using AI to support the triage and analysis of every user-reported email.
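As a rough illustration, the sketch below shows the kind of indicator extraction that sits at the front of an automated triage pipeline. It uses only Python's standard library to parse a reported .eml file; the check_reputation lookup is a hypothetical stub you would wire to your own threat-intelligence source, and the field names are illustrative rather than any particular product's schema.

```python
# Minimal sketch: extract triage indicators from a user-reported email (.eml file).
# Standard library only; check_reputation is a hypothetical placeholder.
import email
import hashlib
import re
from email import policy


def extract_indicators(eml_path: str) -> dict:
    """Parse a reported .eml and pull out the fields an analyst would check first."""
    with open(eml_path, "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)

    body = msg.get_body(preferencelist=("plain", "html"))
    body_text = body.get_content() if body else ""

    return {
        "sender": str(msg.get("From", "")),
        "reply_to": str(msg.get("Reply-To", "")),
        "subject": str(msg.get("Subject", "")),
        # SPF/DKIM/DMARC verdicts recorded by the receiving mail server, if present
        "auth_results": str(msg.get("Authentication-Results", "")),
        "urls": re.findall(r"https?://[^\s\"'<>]+", body_text),
        "attachment_hashes": [
            hashlib.sha256(payload).hexdigest()
            for part in msg.iter_attachments()
            if (payload := part.get_payload(decode=True))
        ],
    }


def check_reputation(url: str) -> bool:
    # Placeholder: replace with a real threat-intel or URL-reputation lookup.
    return False


def triage(eml_path: str) -> dict:
    """Enrich the extracted indicators and hand back a record for the analyst queue."""
    indicators = extract_indicators(eml_path)
    indicators["flagged_urls"] = [u for u in indicators["urls"] if check_reputation(u)]
    return indicators
```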

By emulating the investigation process a human analyst would follow to identify suspicious characteristics, AI can inspect and judge user-reported emails as malicious, spam, safe, or a phishing simulation – often with even higher accuracy. For a malicious or spam email, AI-assisted investigation could also identify similar messages across all mailboxes, and even bulk-remediate messages across multiple tenants. This can free up significant analyst time for more critical tasks like investigating complex threats and conducting proactive threat hunting.
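To make the shape of that loop concrete, here is a minimal sketch of a verdict-and-remediate workflow. Every helper in it (classify_email, find_similar_messages, quarantine_message, notify_reporter) is a hypothetical placeholder standing in for whatever your detection stack and mail platform actually expose; the point is the orchestration, not the integrations.

```python
# Sketch of a verdict-and-remediate loop for user-reported emails.
# All helper functions are hypothetical stubs for your own detection and mail APIs.
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    MALICIOUS = "malicious"
    SPAM = "spam"
    SAFE = "safe"
    SIMULATION = "phishing_simulation"


@dataclass
class ReportedEmail:
    message_id: str
    reporter: str
    sender: str
    subject: str


def classify_email(report: ReportedEmail) -> Verdict:
    # Placeholder: call your detection model or rules engine here.
    return Verdict.SAFE


def find_similar_messages(sender: str, subject: str) -> list[str]:
    # Placeholder: search mailboxes for other messages from the same campaign.
    return []


def quarantine_message(message_id: str) -> None:
    # Placeholder: move the message out of the recipient's inbox.
    pass


def notify_reporter(reporter: str, verdict: Verdict) -> None:
    # Placeholder: send the submission outcome back to the reporter.
    pass


def handle_report(report: ReportedEmail) -> Verdict:
    verdict = classify_email(report)

    if verdict in (Verdict.MALICIOUS, Verdict.SPAM):
        # Hunt for the rest of the campaign across every mailbox the team manages.
        for msg_id in find_similar_messages(report.sender, report.subject):
            quarantine_message(msg_id)

    # Close the loop with the person who reported it, whatever the outcome.
    notify_reporter(report.reporter, verdict)
    return verdict
```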

Improve security awareness

Additionally, traditional user-reported phishing workflows typically lack a feedback loop for both employees and security teams. Because of time constraints, security teams cannot always send a message back to the reporter informing them of the outcome of their submission. 

Not only does this miss a valuable opportunity to engage employees in continued awareness training – and especially customized training that corresponds to their specific email – it also limits visibility into the effectiveness of the overall process. Employees are left in the dark about the outcome, which can leave them feeling unmotivated to report suspicious emails in the future. Sending template-based responses is a good alternative, but still doesn’t go far enough toward influencing security awareness in a meaningful way.

Of course, this confronts security teams with the same dilemma: how do we engage employees in thoughtful, personalized education around reported emails without draining limited resources?

Fortunately, we’re now living in a new era of chatbots and digital personal assistants, increasingly driven by generative AI. In the same way that consumers and commercial brands can interact through auto-generated, human-like replies, what if security analysts and their employees could do the same?

Imagine the following scenario: an employee notices and reports a suspicious email. In a matter of minutes, they receive an automated response that confirms the email was safe, explains why it appeared suspicious, and offers additional context about its validity. The response adopts the company’s brand voice, using a friendly and casual tone, and includes links to relevant training resources.

And it doesn’t have to stop there. Just like the two-way AI conversations we can have with digital customer service agents at an online retailer, the same underlying generative AI technology could let employees respond directly to these AI security analysts. Whether they want more information about the email’s classification, the required next steps, or related resources, a generative AI tool could carry on a conversation much like one with a real human security analyst. All of this saves security teams time and offers a better end-user experience – a win-win all around.
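A rough sketch of how such a reply might be generated is shown below. The OpenAI client and model name are assumptions chosen for illustration – any hosted or internal large language model could play the same role – and the prompt wording, training-page reference, and helper signature are hypothetical.

```python
# Sketch: draft a personalized, on-brand reply to a reporter with a generative model.
# The OpenAI client and model name are illustrative assumptions, not a prescribed stack.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def draft_reporter_reply(reporter_name: str, subject: str, verdict: str, reasons: list[str]) -> str:
    prompt = (
        f"You are a friendly security analyst at our company. {reporter_name} reported "
        f"an email with the subject '{subject}'. Our analysis classified it as {verdict} "
        f"because: {'; '.join(reasons)}. Write a short, casual reply that thanks them for "
        "reporting, explains the verdict in plain language, and points them to our "
        "internal phishing-awareness training page."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; substitute whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example: acknowledge a safe but suspicious-looking message.
# print(draft_reporter_reply("Priya", "Your invoice is ready", "safe",
#                            ["sender domain passed DMARC", "link points to a known vendor portal"]))
```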

At a time when email attacks continue to rise and employees struggle to stay ahead of evolving threat trends and indicators, there’s a major opportunity to overhaul current email security operations processes. By bringing AI into the user-reported phishing workflow, security teams can improve resource efficiency and mitigate burnout, while also opening new opportunities for personalized, targeted coaching.

The AI technologies needed to drive these use cases already exist. By integrating AI tools into their workflows, security teams can reap the same powerful benefits that we’ve seen from AI assistants across countless other use cases in our day-to-day lives.

Mike Britton, chief information security officer, Abnormal Security

Mike Britton

Mike Britton, chief information security officer at Abnormal Security, leads the company’s information security and privacy programs. He builds and maintains Abnormal Security’s customer trust program, performs vendor risk analysis, and protects the workforce with proactive monitoring of the multi-cloud infrastructure. Mike brings 25 years of information security, privacy, compliance, and IT experience from multiple Fortune 500 global companies.

LinkedIn: https://www.linkedin.com/in/mrbritton/

X: https://twitter.com/AbnormalSec
