AI/ML, Distributed Workforce, Zero trust

Slack patches Slack AI issue that could have allowed insider phishing

Share
(Credit: Koshiro – stock.adobe.com)

Slack patched an issue affecting its Slack AI assistant that could have enabled insider phishing attacks, the company announced Wednesday.

The announcement came one day after PromptArmor published a blog post describing how an insider attacker, i.e. a user who is part of the same Slack workspace as the phishing target, could manipulate Slack AI into delivering phishing links to private channels the attacker is not a part of.

The issue is an example of an indirect prompt injection attack, in which an attacker plants malicious data or instructions among content the AI is designed to process when forming its responses, such as an external website or uploaded document.

In this case, malicious instructions are planted in a public Slack channel, as Slack AI is designed to leverage relevant information from the public Slack channels in a user’s workspace when responding to that user’s inquiries and requests.

Although placing these malicious instructions in a public channel, which would be visible to anyone else on the workspace, increases the risk of the attack being detected, PromptArmor noted that a rogue public channel created by an employee, with only that one employee added as a member, could go undetected unless another user specifically searches for that channel.

Slack, which is owned by Salesforce, did not directly mention PromptArmor in its advisory and did not directly provide confirmation to SC Media that the issue it patched on Wednesday is the same described by PromptArmor, although its advisory mentions a blog post published a security researcher on Aug. 20, the same day that PromptArmor’s blog was published.

“When we became aware of the report, we launched an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could phish users for certain data. We’ve deployed a patch to address the issue and have no evidence at this time of unauthorized access to customer data,” a Salesforce spokesperson told SC Media.

How Slack AI exploit could have been used to extract secrets from private channels

PromptArmor demonstrated two proof-of-concept exploits, both of which would require the attacker to have access to the same workspace as the victim (ex. a coworker), create a public channel and convince the victim to click on a link delivered by the AI.

The first attack aimed to extract an API key found in a private channel that the victim is a member of, which could include the user’s private messages to themselves.

The attacker could post a crafted prompt to the public channel they created, without directly interacting with the AI, that nevertheless serves as an instruction to the AI to respond to requests for the API key with a mock error message and an attacker-controlled URL that includes the API key as an HTTP parameter. Because the API key would be unknown to the attacker at this point, the crafted instructions include a request for the AI to replace a placeholder word with the key found in the victim’s private channel.

With these malicious instructions referencing the API key planted in a public area of the workspace, Slack AI would pull these instructions into its context window once the victim, in their own private channel, asked the assistant to retrieve the key. Now, with these instructions in its context window, the AI would deliver the mock error message and URL to the victim.

If the victim clicked on the attacker’s crafted URL, the appended API key — inserted by the AI from the victim’s private channel — would be exfiltrated to the attacker-controlled domain.

“This vulnerability shows how a flaw in the system could let unauthorized people see data they shouldn’t see. This really makes me question how safe our AI tools are,” Akhil Mittal, senior manager of Cybersecurity Strategy and Solutions at Synopsys Software Integrity Group, said in an email to SC Media. “It’s not just about fixing problems but making sure these tools manage our data properly. As AI becomes more common, it’s important for organizations to keep both security and ethics in mind to protect our information and keep trust.”

In a second demonstration, PromptArmor also showed how a similar crafted instruction and mock error message could be planted to deliver any phishing link to a private channel given a specific request by the victim. In their proof-of-concept, the researchers crafted the instructions around the hypothetical victim’s workflow, which included asking the AI to summarize messages from their manager.  

PromptArmor said it reported the issue to Slack on Aug. 14 and the disclosure was first acknowledged by Slack the following day. After a few days of correspondence, Slack told PromptArmor on Aug. 19 that there was insufficient evidence of a vulnerability.

According to PromptArmor’s blog post, Slack stated: “In your first video the information you are querying Slack AI for has been posted to the public channel #slackaitesting2 as shown in the reference. Messages posted to public channels can be searched for and viewed by all Members of the Workspace regardless if they are joined to the channel or not. This is intended behavior.”

Slack subsequently posted its announcement that it had patched an issue with Slack AI on Aug. 21.

“Slack’s security team had prompt responses and showcased a commitment to security and attempted to understand the issue. Given how new prompt injection is and how misunderstood it has been across the industry, this is something that will take the industry time to wrap our heads around collectively,” the PromptArmor researchers wrote in their blog post.

New Slack AI feature could pose further prompt injection risk, researchers say

While PromptArmor concluded its testing of Slack AI prior to Aug. 14, Slack announced that day that its AI assistant would now be able to reference files uploaded to Slack when generating search answers.

PromptArmor noted that this new capability could widen the opportunity for attackers to make indirect prompt injections to the AI via content contained in files. For example, an attacker could potentially include malicious instructions to the AI in a PDF file and set the font color to white in order to hide the instructions from other users.

However, the researchers noted they had not yet tested this possibility themselves, and also noted that workspace admins and owners have the option to restrict Slack AI’s ability to read files.

SC Media asked the Salesforce representative about the potential for file prompt injection as described by PromptArmor and did not receive a response to this question from the representative. SC Media also reached out to PromptArmor for more information and did not receive a response by time of publication.

Get daily email updates

SC Media's daily must-read of the most current and pressing daily news

By clicking the Subscribe button below, you agree to SC Media Terms and Conditions and Privacy Policy.