


Perceptive Security
SOC/SIEM Consultancy

Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot
Published:
15 January 2026 at 15:09:00
Alert date:
15 January 2026 at 16:01:56
Source:
thehackernews.com
Data Breach & Exfiltration, Enterprise Applications, Emerging Technologies
Cybersecurity researchers have disclosed a new attack method called Reprompt that lets threat actors exfiltrate sensitive data from AI chatbots such as Microsoft Copilot with a single click. The attack bypasses enterprise security controls entirely: compromising a victim requires nothing more than a click on a legitimate Microsoft link. This represents a significant gap in AI chatbot security and a realistic data-exfiltration path out of enterprise environments.
Technical details
The Reprompt attack combines three key techniques:
1) URL-borne injection: the 'q' URL parameter in Copilot injects crafted instructions directly from a link (e.g., copilot.microsoft[.]com/?q=Hello), so the prompt is delivered without the victim typing anything.
2) Guardrail bypass: Copilot is instructed to repeat each action twice, exploiting the fact that data-leak safeguards are applied only to the initial request.
3) Request chaining: the initial prompt triggers an ongoing back-and-forth exchange between Copilot and the attacker's server, enabling continuous, hidden, and dynamic data exfiltration.
The attack creates a security blind spot by turning Copilot into an invisible exfiltration channel without requiring any user input prompts, plugins, or connectors. The root cause is the AI system's inability to distinguish instructions typed directly by a user from instructions delivered in a request.
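The first technique can be illustrated with a harmless sketch. The only concrete URL given in this alert is copilot.microsoft[.]com/?q=Hello; the snippet below simply shows how arbitrary text placed in the 'q' parameter travels inside a link and is indistinguishable, on arrival, from user-typed input (the root cause named above). The prompt text here is a benign placeholder, not real attack content.

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Any text placed in the 'q' parameter is delivered to Copilot as if
# the user had typed it. "Hello" stands in for crafted instructions.
BASE = "https://copilot.microsoft.com/?"

injected = "Hello"
crafted_url = BASE + urlencode({"q": injected})
print(crafted_url)  # https://copilot.microsoft.com/?q=Hello

# On the receiving side, the parameter parses back to the exact text --
# nothing marks it as URL-supplied rather than user-typed.
params = parse_qs(urlsplit(crafted_url).query)
print(params["q"][0])  # Hello
```

This is why the attack needs only one click: the link itself carries the entire first prompt.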
Mitigation steps:
Organizations should adopt layered defenses to counter the threat:
- Ensure sensitive tools do not run with elevated privileges.
- Limit agentic access to business-critical information where applicable.
- Carefully consider trust boundaries when deploying AI systems with access to sensitive data.
- Implement robust monitoring.
- Stay informed about emerging AI security research.
Microsoft has addressed the security issue following responsible disclosure.
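As one concrete monitoring measure, outbound proxy logs can be screened for links that pre-seed a Copilot prompt via the 'q' parameter, the entry point described above. This is a hypothetical detection sketch, not a vendor rule; the host list and log format are illustrative assumptions.

```python
from urllib.parse import urlsplit, parse_qs

# Assumed watchlist: hosts whose 'q' parameter is known to pre-seed a prompt.
COPILOT_HOSTS = {"copilot.microsoft.com"}

def flag_reprompt_candidate(url: str) -> bool:
    """Return True if the URL targets a watched host and carries a 'q' prompt."""
    parts = urlsplit(url)
    if parts.hostname not in COPILOT_HOSTS:
        return False
    return "q" in parse_qs(parts.query)

# Illustrative log entries (hypothetical):
sample_log = [
    "https://copilot.microsoft.com/?q=Hello",  # matches the IOC pattern in this alert
    "https://copilot.microsoft.com/",          # no pre-seeded prompt: not flagged
    "https://example.com/?q=Hello",            # unwatched host: not flagged
]
for url in sample_log:
    print(url, "->", flag_reprompt_candidate(url))
```

A flagged URL is not proof of attack, since 'q' also carries legitimate queries; the rule is a triage filter to surface candidate clicks for review alongside the other layered controls listed above.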
Affected products:
Microsoft Copilot
ChatGPT
Anthropic Claude Code
Microsoft Copilot Chat in VS Code
Gemini Enterprise
Perplexity's Comet
Anthropic Claude for Excel
Cursor
Amazon Bedrock
Claude Cowork
Superhuman AI
IBM Bob
Notion AI
Hugging Face Chat
Google Antigravity
Slack AI
Related links:
https://www.varonis.com/blog/reprompt
https://www.radware.com/blog/threat-intelligence/zombieagent/
https://checkmarx.com/zero-post/turning-ai-safeguards-into-weapons-with-hitl-dialog-forging/
https://noma.security/blog/geminijack-google-gemini-zero-click-vulnerability/
https://www.lasso.security/blog/red-teaming-browsesafe-perplexity-prompt-injections-risks
https://news.ncsu.edu/2025/10/ai-privacy-hardware-vulnerability/
https://unit42.paloaltonetworks.com/model-context-protocol-attack-vectors/
https://www.promptarmor.com/resources/cellshock-claude-ai-is-excel-lent-at-stealing-data
https://www.ox.security/blog/two-clicks-to-1m-how-attackers-can-drain-enterprise-budgets-through-ai-platforms/
https://www.promptarmor.com/resources/claude-cowork-exfiltrates-files
https://www.promptarmor.com/resources/superhuman-ai-exfiltrates-emails
https://www.promptarmor.com/resources/ibm-ai-(-bob-)-downloads-and-executes-malware
https://www.promptarmor.com/resources/notion-ai-unpatched-data-exfiltration
https://www.promptarmor.com/resources/huggingface-chat-exfiltrates-data
https://www.promptarmor.com/resources/google-antigravity-exfiltrates-data
https://www.promptarmor.com/resources/data-exfiltration-from-slack-ai-via-indirect-prompt-injection
https://labs.zenity.io/p/connected-agents-the-hidden-agentic-puppeteer
Related CVEs:
Related threat actors:
IOCs:
copilot.microsoft[.]com/?q=Hello
This article was created with the assistance of AI technology by Perceptive.
