🧀 BigCheese.ai

Social

Attackers can exfil data with Slack AI

🧀

Attackers can exfiltrate private data from Slack channels via indirect prompt injection in Slack AI. This vulnerability leverages the inability of language models to distinguish between user queries and appended malicious instructions, potentially leading to data leakage without the need for attackers to access private channels. The issue escalated after Aug 14, 2024, when Slack AI began ingesting documents, widening the attack surface. Slack has been responsive but has not fully acknowledged the problem, prompting a public disclosure for user protection.

  • Slack AI allows query of messages in natural language.
  • Data can be exfiltrated from private channels.
  • Attack does not require access to target channel.
  • August 14th update increased risk surface area.
  • Public disclosure made for user safety.