
Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Published:

15 January 2026 at 15:09:00

Alert date:

15 January 2026 at 16:01:56

Source:

thehackernews.com


Data Breach & Exfiltration, Enterprise Applications, Emerging Technologies

Cybersecurity researchers have disclosed a new attack method called Reprompt that allows threat actors to exfiltrate sensitive data from AI chatbots such as Microsoft Copilot with a single click. The attack bypasses enterprise security controls entirely: compromising a victim requires only that they click a legitimate Microsoft link. This represents a significant weakness in AI chatbot security and a viable data-exfiltration path out of enterprise environments.

Technical details

The Reprompt attack chains three key techniques:

1. Prompt injection via URL: Copilot's 'q' URL parameter injects crafted instructions directly from a link (e.g., copilot.microsoft[.]com/?q=Hello), so simply opening the URL submits the attacker's prompt.
2. Guardrail bypass through repetition: the injected prompt instructs Copilot to repeat each action twice, exploiting the fact that data-leak safeguards apply only to the initial request.
3. Continuous exfiltration chain: the initial prompt triggers an ongoing back-and-forth exchange between Copilot and the attacker's server, enabling continuous, hidden, and dynamic data exfiltration.

The attack creates a security blind spot by turning Copilot into an invisible exfiltration channel without requiring any user-typed prompts, plugins, or connectors. The root cause is the AI system's inability to distinguish instructions typed directly by a user from instructions delivered inside the request itself.
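To make the first technique concrete, the following is a minimal, illustrative sketch (not the researchers' actual exploit) of how a prompt can be smuggled into a pre-filled chat link via a 'q' query parameter. The injected text and the `attacker.example` endpoint are hypothetical placeholders; the only detail taken from the advisory is the URL pattern itself.

```python
from urllib.parse import urlencode

# Base URL pattern described in the advisory (defanged there as
# copilot.microsoft[.]com/?q=...).
BASE = "https://copilot.microsoft.com/"

def build_prefilled_chat_url(injected_prompt: str) -> str:
    """Return a link whose 'q' parameter carries an attacker-chosen prompt.

    urlencode percent-encodes the prompt, so arbitrary instructions
    survive the trip through the URL intact.
    """
    return BASE + "?" + urlencode({"q": injected_prompt})

# The advisory's benign example prompt:
print(build_prefilled_chat_url("Hello"))
# A hypothetical malicious variant would instead embed instructions to
# repeat actions and relay data to a server the attacker controls.
```

Because the link points at a legitimate Microsoft domain, it passes casual inspection and most reputation-based URL filters, which is what makes the single-click delivery effective.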

Mitigation steps:

Organizations should adopt layered defenses to counter the threat:

- Ensure sensitive tools do not run with elevated privileges.
- Limit agentic access to business-critical information where applicable.
- Carefully consider trust boundaries when deploying AI systems with access to sensitive data.
- Implement robust monitoring.
- Stay informed about emerging AI security research.

Microsoft has addressed the security issue following responsible disclosure.

Affected products:

Microsoft Copilot
ChatGPT
Anthropic Claude Code
Microsoft Copilot Chat in VS Code
Gemini Enterprise
Perplexity's Comet
Anthropic Claude for Excel
Cursor
Amazon Bedrock
Claude Cowork
Superhuman AI
IBM Bob
Notion AI
Hugging Face Chat
Google Antigravity
Slack AI

Related links:

Related CVEs:

Related threat actors:

IOCs:

copilot.microsoft[.]com/?q=Hello

This article was created with the assistance of AI technology by Perceptive.

© 2025 by Perceptive Security. All rights reserved.

email: info@perceptivesecurity.com

Disclaimer: This website displays information obtained from external sources. Perceptive accepts no responsibility for the content, accuracy, or completeness of this information.
