
Five security risks from Generative AI


We’re all still coming to grips with the exciting possibilities of Generative AI (GenAI), a technology that can create realistic and novel content such as images, text, audio, and video. Its use cases span the enterprise: it can enhance creativity, improve productivity, and generally help people and businesses work more efficiently. No other technology on the horizon stands to transform how we work as drastically as AI.

However, GenAI also poses significant cybersecurity and data risks. From seemingly innocuous user prompts that may contain sensitive information (which AI tools can collect and store) to large-scale malware campaigns, generative AI has nearly single-handedly expanded the ways modern enterprises can lose sensitive information.

Most LLM companies are only now starting to treat data security as part of their strategy and customer obligations. Businesses must adapt their security strategies accordingly, as GenAI security risks are revealing themselves as multi-faceted threats that stem from how users inside and outside the organization interact with the tools.

What we know so far

GenAI systems can collect, store, and process large amounts of data from various sources – including user prompts. This ties into five primary risks organizations face today:

  • Data leaks: If employees enter sensitive data into GenAI prompts, such as unreleased financial statements or intellectual property, enterprises open themselves up to third-party risk akin to storing data on a file-sharing platform. Tools such as ChatGPT or Copilot could also leak that proprietary data while answering prompts from users outside the organization (a simple mitigation is sketched after this list).
  • Malware attacks: GenAI can generate new and complex types of malware that evade conventional detection methods, and organizations may face a wave of new zero-day attacks as a result. Without purpose-built defense mechanisms in place, IT teams will have a difficult time keeping pace with threat actors. Security products need to apply the same technologies at scale to keep up with and stay ahead of these sophisticated attack methods.
  • Phishing attacks: The technology excels at creating convincing fake content that mimics real content but contains false or misleading information. Attackers can use this fake content to trick users into revealing sensitive information or performing actions that compromise the security of the business. Threat actors can create new phishing campaigns – complete with believable stories, pictures, and video – in minutes, and businesses will likely see a higher volume of phishing attempts as a result. Deepfakes that spoof voices are already being used in targeted social-engineering attacks, and they have proven very effective.
  • Bias: LLMs can produce biased responses, returning misleading or incorrect information when the underlying models were trained on biased data.
  • Inaccuracies: LLMs can also deliver the wrong answer to a question because they lack human understanding and the full context of a situation.
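
To make the data-leak risk concrete, here is a minimal sketch of a pre-submission prompt filter that redacts common sensitive patterns before text ever reaches an external GenAI service. The patterns and the redact_prompt helper are illustrative assumptions, not any vendor’s actual API, and a production DLP policy would be far broader.

```python
import re

# Illustrative patterns only (assumptions for this sketch); a real DLP
# policy would cover many more data types and be tuned per organization.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Scrub sensitive substrings before a prompt leaves the enterprise.

    Returns the redacted prompt plus the names of the patterns that
    fired, so a security team can log, alert on, or block the request.
    """
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, findings

scrubbed, hits = redact_prompt(
    "Summarize Q3 results: card 4111 1111 1111 1111, SSN 123-45-6789"
)
print(scrubbed)  # sensitive values replaced with placeholders
print(hits)      # ['ssn', 'credit_card']
```

A filter like this would typically sit in a browser extension or proxy in front of GenAI endpoints, so the policy applies no matter which tool an employee chooses.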

Prioritize data security

Mitigating the security risks of generative AI broadly centers around three pillars: employee awareness, security frameworks, and technology.

Educating employees on the safe handling of sensitive information is nothing new. But introducing generative AI tools into the workforce demands fresh consideration of the new data security threats that inevitably accompany them. First, businesses must ensure employees understand what information they can and can’t share with AI-powered technologies. Similarly, people must be made aware of the increase in malware and phishing campaigns that GenAI may fuel.

The way businesses operate has become more complex than ever – and that’s why securing data wherever it resides is now a business imperative. Data continues to move from traditional on-premises locations to cloud environments, people access it from anywhere, and organizations must keep pace with a growing set of regulatory requirements.

Traditional data loss prevention (DLP) capabilities have been around for decades and remain powerful for their intended use cases, but as data moves to the cloud, DLP must move with it, extending its abilities and coverage. Organizations are now embracing cloud-native DLP – prioritizing unified enforcement that extends data security across every important channel (a rough illustration follows below). This approach streamlines out-of-the-box compliance and delivers consistent protection wherever data resides.
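
As a rough illustration of unified enforcement, the sketch below applies a single policy to data-movement events from several channels. The channel names, Event fields, and blocked terms are hypothetical stand-ins, not a real product’s data model.

```python
from dataclasses import dataclass

@dataclass
class Event:
    channel: str   # e.g. "email", "web_upload", "genai_prompt", "usb"
    content: str   # the data attempting to leave the organization

# Assumed example terms; a real policy would use classifiers, not keywords.
BLOCKED_TERMS = {"project aurora", "unreleased financials"}

def violates_policy(event: Event) -> bool:
    """One policy definition, enforced identically on every channel."""
    text = event.content.lower()
    return any(term in text for term in BLOCKED_TERMS)

events = [
    Event("email", "Draft of unreleased financials attached"),
    Event("genai_prompt", "Summarize the Project Aurora roadmap"),
    Event("web_upload", "Lunch menu for Friday"),
]
for ev in events:
    verdict = "BLOCK" if violates_policy(ev) else "ALLOW"
    print(f"{verdict}: {ev.channel}")
```

The point of the cloud-native model is that this single policy – rather than per-channel copies of it – follows the data wherever it moves.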

Leveraging data security posture management (DSPM) tools also allows for further protection. AI-powered DSPM products enhance data security and protection by quickly and accurately identifying data risk, empowering decision-making by examining data content and context, and even remediating risks before attackers can exploit them. This prioritizes essential transparency about data storage, access and usage so that companies can assess their data security, identify vulnerabilities and initiate measures to reduce risk as efficiently as possible.
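
In rough form, the posture-management idea might look like the sketch below: inventory data assets, score each one by combining content sensitivity with access context, and surface the riskiest first. The record fields and scoring weights here are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    path: str
    contains_pii: bool   # content: what the data is
    readers: int         # context: how many identities can read it
    public_link: bool    # context: is it exposed outside the org?

def risk_score(asset: DataAsset) -> int:
    """Blend content and context into a simple, assumed risk score."""
    score = 50 if asset.contains_pii else 0
    score += 40 if asset.public_link else 0
    score += min(asset.readers, 100) // 10  # broad access adds risk
    return score

inventory = [
    DataAsset("s3://finance/q4-draft.xlsx", True, 300, False),
    DataAsset("gdrive://hr/benefits-faq.pdf", False, 40, True),
    DataAsset("s3://eng/design-doc.md", False, 12, False),
]
# Triage: remediate the highest-risk assets first.
for asset in sorted(inventory, key=risk_score, reverse=True):
    print(f"{risk_score(asset):3d}  {asset.path}")
```

An AI-powered DSPM product would replace these boolean flags with learned classifiers over content and access logs, but the triage loop is the same idea.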

Platforms that combine innovations like DSPM and DLP into a unified product that prioritizes data security everywhere are ideal – bridging security capabilities wherever data exists.

Successful implementation of generative AI can significantly boost an organization’s performance and productivity. However, it’s vital that companies fully understand the cybersecurity threats these new technologies can introduce to the workplace. Armed with this understanding, security pros can take the necessary steps to reduce their risk with minimal business impact.

Jaimen Hoopes, vice president, product management, Forcepoint
