
Five hopes and fears every CISO has for AI


For almost a century, artificial intelligence (AI) has been depicted in our media. From Fritz Lang’s 1927 film “Metropolis” through major blockbusters like “The Terminator,” “2001: A Space Odyssey,” and “Her,” movies have included or focused on AI’s potential impact. From a mechanical assistant that can handle any task we don’t want to do ourselves, to a doomsday scenario in which humans create an AI that threatens to end the world as we know it, these films and many others dramatize our greatest hopes and fears for AI.

Fictional films aside, widespread adoption and use of AI have escalated today. Businesses and consumers alike leverage AI daily, and forecasts broadly expect AI to disrupt and transform the way we work and live for years to come.

Gartner predicts the market for AI software will reach almost $134 billion by 2025. For cybersecurity, the expanded use of AI has been an exciting prospect, full of potential but tempered by concerns about how it may compromise data. Today, CISOs have plenty of hopes and fears about what comes next. Here are three hopes and two fears that top my list:

Hope: AI will reduce tedious work.

Many aspects of daily work involve tedious, manual, and time-consuming processes. For CISOs and security pros, these tasks often center on achieving and proving compliance: figuring out how to respond to security questionnaires, documenting evidence, and notifying colleagues to take action. There’s almost always too much to do and not enough time: too many vulnerabilities to prioritize and remediate, too many open-source libraries that need updating, too much data to sort through, and so on.

We can now hope that AI will shoulder some of these time-consuming and tedious tasks, or better yet, automate them completely, giving cybersecurity professionals more time in their days to focus on the work with the most impact. By leveraging neural networks and knowledge graphs built on each company’s software stack and business requirements, AI offers the possibility of customized security programs that get stronger over time, without the painstaking human intervention common today.
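To make the knowledge-graph idea concrete, here is a minimal, hypothetical sketch in Python (using the networkx library): it models services and the open-source libraries they depend on, then flags dependencies that have fallen behind. Every service name, version, and rule below is an invented illustration, not a description of any particular product.

```python
# Illustrative sketch: a tiny knowledge graph of a company's software stack,
# used to flag open-source libraries that need updating. All names, versions,
# and the staleness rule are hypothetical examples.
import networkx as nx

graph = nx.DiGraph()

# Nodes are services and libraries; each edge means "depends on" and carries
# the installed and latest-known versions as attributes.
graph.add_edge("billing-service", "openssl", installed="1.1.1", latest="3.0.13")
graph.add_edge("billing-service", "log4j", installed="2.14.0", latest="2.24.3")
graph.add_edge("web-frontend", "openssl", installed="3.0.13", latest="3.0.13")

def outdated_dependencies(g: nx.DiGraph):
    """Yield (service, library, installed, latest) where the install lags."""
    for service, library, attrs in g.edges(data=True):
        if attrs["installed"] != attrs["latest"]:
            yield service, library, attrs["installed"], attrs["latest"]

for service, library, installed, latest in outdated_dependencies(graph):
    print(f"{service}: update {library} {installed} -> {latest}")
```

In a real program, the graph would be fed by asset inventories and vulnerability feeds, and the latest versions would come from package registries rather than hard-coded values; the hope is that AI keeps such a model current without manual upkeep.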

Hope: AI will predict risks, breaches, and failures before they occur.

AI excels at pattern recognition. If we can apply that strength to monitoring cybersecurity programs and raising concerns, CISOs hope it can bring a predictive element to Governance, Risk, and Compliance (GRC): anticipating a system failure before it ever happens, suggesting policies and controls the team will need for an upcoming compliance framework, or surfacing a risk the team has not yet noticed. Most products available today deliver alerts only after a problem has occurred; getting ahead of potential issues would be a real relief for GRC professionals who are chronically pressed for time.
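As a rough illustration of what such a predictive element could look like, the sketch below trains scikit-learn’s IsolationForest on historical control telemetry and flags readings that break the learned pattern before they become outright failures. The metrics and numbers are invented for the example; real GRC telemetry would be far richer.

```python
# Illustrative sketch only: unsupervised anomaly detection over GRC telemetry,
# flagging unusual readings before they turn into control failures. The data
# (backup duration in minutes, failed logins per day) is invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical observations: [nightly backup duration (min), failed logins/day]
history = np.array([
    [42, 3], [45, 5], [40, 2], [44, 4], [43, 3],
    [41, 6], [46, 4], [44, 2], [42, 5], [45, 3],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(history)

# Today's readings: backups suddenly slow and failed logins spike.
today = np.array([[95, 48]])
if model.predict(today)[0] == -1:  # -1 marks an outlier
    print("Anomaly: investigate backup job and authentication logs")
```

The appeal of an unsupervised approach here is that it needs no labeled failures to learn from, which matters because genuine control failures are, hopefully, rare.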

Hope: AI will better communicate GRC’s business impact.

CISOs are always looking for better ways to communicate the impact of GRC programs on overall business health. The metrics cybersecurity professionals track (time spent with auditors, time spent on security questionnaires, SLAs met for security reviews, number of risks remediated, number of employees meeting device security requirements) don’t always resonate with the C-suite and board. With AI, CISOs hope to create compelling quantitative and qualitative analyses that show leadership how a successful, mature GRC program protects revenue, reduces liability, and earns new business through trust and transparency.

Fear: Proprietary data leaks.

Most AI-powered tools available today rely on some combination of proprietary, open-source, and third-party training data and models. Leaders, including CISOs, who are tasked with protecting first-party and customer data are alarmed that this information could end up in training sets and then surface in the results those AI models generate for other users.

Fear: AI will produce too much generic information at the expense of accurate information specific to my business.

Cybersecurity leaders rely on information that accurately represents their business to demonstrate credibility and earn trust. As more and more companies embrace AI across a variety of functions, the accuracy and authenticity of AI-generated results come into question. AI cannot take over and become solely responsible for managing cybersecurity and implementing GRC workflows. CISOs and cybersecurity leaders must retain control of and accountability for cybersecurity; they are the ones with the expertise to assess the results that AI models produce.

We still don’t know how AI will reshape cybersecurity. Today, we hope it will make everyone’s jobs easier and support the goal of protecting organizations from cyber threats. If we focus on the hopes rather than the fears, AI can automate tedious tasks, recognize patterns, and surface errors and issues, helping organizations improve their cyber hygiene dramatically in the years ahead.

Sravish Sridhar, founder and CEO, TrustCloud
