Details have emerged about a now-fixed vulnerability in Microsoft 365 Copilot that could allow the theft of sensitive user information using a technique known as ASCII smuggling.
“ASCII smuggling is a new technique that uses special Unicode characters that mimic ASCII but do not actually appear in the user interface,” said security researcher Johan Rehberger.
“This is because the attacker [large language model] It renders data invisible to the user and embeds it within clickable hyperlinks – essentially setting up a data exfiltration.
The overall attack chains together multiple attack techniques to form a reliable exploit chain, which includes the following steps:
- Triggering prompt injection via malicious content hidden in documents shared in chat
- Use a prompt injection payload to tell Copilot to search for more emails and documents
- ASCII smuggling is used to trick users into clicking on links that leak valuable data to third-party servers
The end result of this attack is that sensitive data in emails, including multi-factor authentication (MFA) codes, can be sent to adversary-controlled servers. Microsoft has been addressing this issue since its responsible disclosure in January 2024.
The development comes after a proof-of-concept (PoC) attack was demonstrated against Microsoft’s Copilot system to manipulate responses, steal personal data and circumvent security protections, underscoring the need for monitoring risks in artificial intelligence (AI) tools.
The technique detailed by Zenity could allow malicious actors to perform Search Augmentation Generation (RAG) poisoning and indirect prompt injection, allowing them to execute remote code execution attacks and gain complete control over Microsoft Copilot and other AI apps. In a potential attack scenario, an external hacker with code execution capabilities could trick Copilot into serving a phishing page to users.
Perhaps one of the most novel attacks is the ability to turn AI into a spear-phishing machine. A red teaming technique called LOLCopilot allows attackers to access victim email accounts and send phishing messages that mimic the style of the compromised users.
Microsoft also acknowledges that publicly available Copilot bots created with Microsoft Copilot Studio and lacking authentication protections could serve as a vector for threat actors with prior knowledge of Copilot’s name or URL to extract sensitive information.
“Enterprises should assess their risk tolerance and risk exposure to prevent data leakage from Copilot (formerly Power Virtual Agents), and enable data loss prevention and other security controls accordingly to control the creation and publishing of Copilot,” said Rehberger.