Guest | Contact Us | Print Page | Sign In
News Blog
Blog Home All Blogs
Search all posts for:   

 

View all (361) posts »
 

Reprompt: When AI Assistants Become Data Thieves

Posted By Genady Vishnevetsky - Stewart Title, 5 hours ago

[Reposted from ALTA Forum with permission]

You click a link that appears to be from Microsoft. It takes you to Copilot, the legitimate site. But behind the scenes, your AI assistant quietly packages your username, location, and recent conversations and sends them to a hacker. You close the tab thinking you're done, but the data theft continues in the background.

This actually happened. Researchers at Varonis identified a vulnerability in Microsoft Copilot Personal, called "Reprompt," that enabled silent data theft via a single click. Microsoft patched it on January 26, 2026, but this attack reveals how AI tools can be weaponized in ways most of us would never suspect.

Here's how it worked.
 
Attackers created links that pointed to the legitimate copilot.microsoft.com website. Hidden within the web address was additional text, typically used to pre-fill search fields, that injected malicious instructions as soon as the page loaded. Think of it this way. Imagine someone sending you a link to your company's internal system, but the link secretly contains hidden commands that run automatically when you open it. You land on the real system, but those commands are already running without your knowledge.

What made Reprompt particularly clever was its double-request technique. Copilot has built-in safety controls that block suspicious requests. So, attackers instructed the AI to make every request twice. The first attempt would trigger the safety check and get blocked. The second attempt, identical to the first, would sail through undetected.

Once inside, the attack worked in stages. First, it extracted your username. Then your location. Then, any personal information Copilot had learned about you. Finally, it summarized your recent conversations. All the malicious commands originated on the attacker's computer after that initial click, so they weren't visible in the original link.

 

Takeaways:

  • Hover before you click on AI tool links. Look for unusually long web addresses or extra characters after question marks. When in doubt, type the address yourself instead of clicking

  • Question any text that appears automatically. If you arrive at an AI tool and there is already a question or command typed in the input box that you did not write, stop immediately. Close the tab rather than letting it run

  • Notice unusual AI behavior. Your AI assistant making requests you did not ask for, or accessing files and information without your prompting, signals that something is wrong. Exit the session completely if this happens

  • Enable automatic updates. Microsoft fixed this specific vulnerability, but similar techniques could emerge in other AI assistants. Set your tools to update automatically so you are protected as patches are released

  • Use the direct approach. Access AI tools the same way you access your bank: type the address yourself or use a trusted bookmark. Never click links to these tools from emails or messages, even from people you trust whose accounts might be compromised
So what could have prevented this attack? Not clicking the link in the first place. Even though it pointed to Microsoft's website, the hidden text in the URL contained instructions you never intended to execute.

 

Genady Vishnevetsky
Chief Info Security Officer
Stewart Title Guaranty Company

This post has not been tagged.

Permalink | Comments (0)
 
Contact Us

120 Broadway, Suite 945
New York, NY 10271

212. 964. 3701

info@nyslta.org

Our Mission

The New York State Land Title Association, Inc. advances the common interests of all those engaged in the business of abstracting, examining, insuring titles, and otherwise facilitating real estate transactions. The Association promotes the business and general welfare of its Members and protects real property title holders’ ownership rights.