A Real Email From Robinhood Carrying Real Phishing — and the Inbox-Reading Agent Built to Trust It

Written by the Rafter Team

On April 26, 2026, Gmail users with Robinhood accounts received a polite-looking notification from noreply@robinhood.com with the subject line "Your recent login to Robinhood." It was sent through Robinhood's own mail infrastructure. SPF, DKIM, and DMARC all passed. Gmail threaded it into the same conversation as prior, legitimate Robinhood security alerts. The email contained a phishing URL pointing to robinhood[.]casevaultreview[.]com/verify/, rendered from attacker-supplied HTML that no Robinhood template author ever wrote.
Robinhood's statement, issued the same evening: "This phishing attempt was made possible by an abuse of the account creation flow. It was not a breach of our systems or customer accounts, and personal information and funds were not impacted." The statement is technically accurate and, for the question this post asks, beside the point.
The point is that every signal a receiving system uses to ask "is this email authentic?" answered yes: the sender, the signature, the threading, the prior history, the brand. Nobody asked the question that mattered: is the content of this email safe to act on?
If you have a Robinhood account: do not click links inside any email purporting to be from Robinhood until you have verified the action by logging into the Robinhood app directly. The phishing campaign is using authenticated mail from Robinhood's real domain, so the usual "check the sender" advice does not apply here. Ripple CTO David Schwartz put it bluntly on April 27: "Any emails you get that appear to be from Robinhood (and may actually be from their email system) are phishing attempts."
What actually happened
Reporting from Help Net Security and Protos converges on a three-step chain.
Step 1: Gmail's dot-aliasing
The attacker creates a new Robinhood account using a dotted variant of the target's Gmail address. Gmail treats victim.name@gmail.com and victimname@gmail.com as the same inbox — dots before the @ are ignored on delivery. Robinhood, like most consumer SaaS, treats them as different users at signup time and normalizes nothing.
That mismatch is the whole delivery primitive. The attacker registers a "user" whose system-generated emails will land in the real victim's inbox, while the account itself is under the attacker's control.
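The mismatch fits in a few lines. The sketch below is illustrative, not Robinhood's code; it shows the normalization a signup flow would need to perform before treating two Gmail addresses as distinct users, and real deduplication would need provider-specific rules beyond Gmail's.

```python
def normalize_gmail(address: str) -> str:
    """Collapse Gmail dot-aliases (and +tags) to one canonical mailbox.

    Illustrative sketch: Gmail ignores dots in the local part and
    everything after a '+' on delivery, so signup dedup must too.
    """
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0]   # drop +tag suffixes
        local = local.replace(".", "")   # dots are ignored on delivery
    return f"{local}@{domain}"

# Both variants deliver to the same inbox, so they must collide at signup:
assert normalize_gmail("victim.name@gmail.com") == normalize_gmail("victimname@gmail.com")
```

A signup flow that keys uniqueness on the normalized form closes the delivery primitive; one that keys on the raw string hands it to the attacker.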
Step 2: stored injection in the notification template
During signup, the attacker sets the device name on the new account to a block of raw HTML, including a phishing link. Robinhood's systems persist that device name without sanitizing it.
When the platform's automatic "unrecognized activity" notification email fires, it interpolates the device name into the body of the email. The renderer does not HTML-escape the field. The malicious markup renders.
This is a textbook stored-injection bug — formally CWE-79 (cross-site scripting), with the rendering surface being an HTML email body rather than a web page. Same shape, different consumer.
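The bug and its fix can be shown side by side. The template and device-name payload below are hypothetical stand-ins (with an example.com host in place of the real phishing domain), mirroring the shape of the bug rather than Robinhood's actual template:

```python
from html import escape
from string import Template

# Hypothetical notification template with the same shape as the bug:
TEMPLATE = Template("<p>New login from device: $device_name</p>")

# Attacker-controlled "device name" set during signup:
device_name = '<a href="https://phish.example.com/verify/">verify your account</a>'

# Vulnerable: the user-controlled field is interpolated as raw HTML.
unsafe = TEMPLATE.substitute(device_name=device_name)

# Fixed: every user-controlled field is escaped before interpolation.
safe = TEMPLATE.substitute(device_name=escape(device_name))

assert "<a href=" in unsafe      # attacker markup renders as a live link
assert "<a href=" not in safe    # escaped version shows inert text instead
assert "&lt;a href=" in safe
```

The one-function difference is the entire vulnerability: `escape()` at the interpolation boundary, applied to every field, with no exceptions for fields that "couldn't possibly" contain markup.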
Step 3: perfect authentication
Because the email is generated and sent by Robinhood's own infrastructure, it carries valid DKIM signatures and SPF-aligned envelope headers. DMARC passes. From the receiving Gmail server's perspective, this is not a phishing email — it is a routine login notification from a verified sender.
Gmail does what it does for authenticated mail from a familiar sender: threads the message into the same conversation as the victim's previous, legitimate Robinhood security alerts. By the time a human is reading the email, every contextual signal in the inbox is reinforcing "this is real."
The only thing about the email that doesn't match the genuine article is the URL it eventually wants the reader to click, and the entire chain is designed to make sure the reader never inspects it.
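The receiving system's view can be reconstructed with the standard library. The message below is a minimal, abbreviated stand-in (with robinhood.phish.example in place of the real phishing host): every authentication check in the Authentication-Results header passes, while the one hostile signal lives in the body, where no authentication check looks.

```python
from email import message_from_string

# Minimal sketch of what a receiving mail server sees (headers abbreviated,
# phishing host replaced with a stand-in domain):
raw = """\
From: noreply@robinhood.com
Subject: Your recent login to Robinhood
Authentication-Results: mx.google.com; spf=pass; dkim=pass header.d=robinhood.com; dmarc=pass
Content-Type: text/html

<p>New login from device:
<a href="https://robinhood.phish.example/verify/">verify your account</a></p>
"""

msg = message_from_string(raw)
auth = msg["Authentication-Results"]

# Every authenticity signal answers yes...
assert all(f"{check}=pass" in auth for check in ("spf", "dkim", "dmarc"))

# ...while the link in the body points somewhere robinhood.com never signed for.
assert "robinhood.phish.example" in msg.get_payload()
```

Nothing in the header-level checks is wrong; they answer the question they were designed to answer. The gap is that no layer asks a question about the body.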
What this is, in code-quality terms
Reframe the bug class away from the phishing framing for a moment.
What Robinhood shipped is a stored cross-site scripting bug in an email-template renderer. The classic XSS lifecycle:
- Attacker submits user-controlled text to a system that stores it.
- The system later renders that text into HTML viewed by some other user.
- The renderer fails to escape, so the attacker's HTML executes in the victim's browser or renders live in their mail client.
The only thing that makes the Robinhood version unusual is the rendering surface. Email-template renderers are systematically less scrutinized than web views. The same engineering team that would never let an unescaped variable into a dangerouslySetInnerHTML in their React app will routinely interpolate raw fields into a Handlebars or Jinja email template, because email feels less hostile than the web. It isn't.
There is no novel exploit primitive here. There is no zero-day. There is a notification template that doesn't escape its inputs, in production, at a publicly traded brokerage.
Why this hits AI agents harder
The reason this incident matters more than its individual victim count suggests is the audience that is increasingly reading email on humans' behalf.
Authenticated channels are exactly what agents use to decide what to trust
Inbox-reading assistants — Gemini's mail summarization, Apple Intelligence's thread prioritization, Copilot mail integrations, Claude integrations sitting on top of mail through MCP servers, customer-service bots ingesting tickets that include forwarded vendor notifications — all of them weight authenticity heavily. SPF/DKIM/DMARC pass plus a familiar sender is the strongest combination of trust signals an email can produce.
It is precisely the combination Robinhood's phishing email had.
Stored injection in a vendor's notification template is prompt injection's natural cousin
The attack surface for an agent is any user-controlled string that ends up in the model's context window. If a vendor's notification email is rendered into the agent's prompt — and it will be, because that is what "summarize this thread" or "act on this alert" means — then attacker HTML inside that email is a hostile prompt being delivered through the most authenticated channel the agent has.
The same <a href="https://robinhood[.]casevaultreview[.]com/verify/">verify your account</a> link that fools a human is a literal instruction string when it lands in the prompt of an action-taking agent.
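One cheap defense at this layer is to compare what a link claims with where it goes. The sketch below uses the standard-library HTML parser to collect (href, visible text) pairs from an email body, so a pipeline can flag anchors whose text invokes a brand their href does not belong to; the stand-in host and thresholds are illustrative.

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect (href, visible_text) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="https://robinhood.phish.example/verify/">'
             'verify your Robinhood account</a>')
href, text = auditor.links[0]

# The visible text invokes the brand; the href does not belong to it.
assert "robinhood" in text.lower()
assert "robinhood.com" not in href
```

A mismatch like this is exactly the signal a human's hover-and-squint check produces, made available to the pipeline before any tool call fires.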
Action latency
A careful human pauses, hovers, copies the URL, checks the domain. That pause is where the chain breaks. An agent's tool-use loop runs in milliseconds. "Click the verification link to confirm the login" is a sensible-looking instruction; an agent that has been given URL-following tools will follow it.
Threading
Gmail's behavior of threading authenticated mail into existing conversations is friendly to humans and dangerous to agents. Thread continuity is a heuristic agents rely on to answer the question is this part of an ongoing trusted exchange? The attacker just used it as cover.
Scale
A human falls for one phish. An MCP-fed agent triaging a thousand inboxes processes a thousand instances of the same template-injection bug, with no per-user pause. One vendor bug becomes a workflow-wide compromise vector across every assistant that reads mail.
What to do
The action items split cleanly by where you sit.
If you use an inbox-reading assistant
Treat its URL clicks and tool actions on email content as living inside a less-trusted layer than the email's authentication implies. The assistant's "this is signed by a verified sender" check is doing less work than the marketing copy suggests. Configure tool-use scopes so that "click links in emails" is not a default-allow capability for every signed sender. If your assistant supports per-tool confirmation prompts, leave them on for any tool that follows URLs out of mail content.
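The gating rule can be made concrete as a small policy function. The tool names and the Action type below are assumptions for illustration, not any particular assistant's API; the point is the shape: sender authentication is an input to the decision, never a bypass of it.

```python
from dataclasses import dataclass

# Hypothetical tool names; any tool that follows a URL belongs in this set.
URL_FOLLOWING_TOOLS = {"open_url", "click_link", "fetch_page"}

@dataclass
class Action:
    tool: str
    source: str              # where the instruction came from, e.g. "email_body"
    sender_authenticated: bool

def requires_confirmation(action: Action) -> bool:
    # Authentication is a signal, not a bypass: anything that follows a URL
    # out of mail content stays behind a human confirmation prompt, even
    # when SPF/DKIM/DMARC all passed.
    return action.tool in URL_FOLLOWING_TOOLS and action.source == "email_body"

# A signed sender does not unlock link-following:
assert requires_confirmation(Action("open_url", "email_body", sender_authenticated=True))
# Ordinary inbox operations on the user's own request are unaffected:
assert not requires_confirmation(Action("search_inbox", "user_request", sender_authenticated=True))
```

The Robinhood email would have sailed past any policy keyed on `sender_authenticated` alone; keying on the tool and the instruction's source is what catches it.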
If you ship a SaaS that sends notification emails
Every user-controlled string that flows into an email template is adversarial input. The Robinhood device-name field is the textbook example, but it is the same shape of bug whether the field is a profile name, a transaction memo, a billing address, or a "what should we call this device" prompt the user writes during onboarding.
The diff that introduces an unsafe template is exactly where a code scanner should flag it. Rafter's Code Analysis Engine looks for stored-injection patterns of this kind on every push, before the unsafe template ships. It does not unwind a years-old template that is already in production — that is an audit job — but it shortens the window for the next one to be introduced.
The harder, slower work is auditing existing email-template renderers across the product. Every notification path, every transactional email, every system-generated message that includes user-controlled fields needs to be reviewed under the question what happens if this field contains HTML? Do not assume the answer is "the renderer escapes it." Treat that as a claim that has to be verified per template.
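A first pass over that audit can be automated crudely. The sketch below flags Jinja-style interpolations that lack an explicit escape filter; it is deliberately naive (real engines have autoescape settings a regex cannot see), so its hits are "verify by hand" prompts, not verdicts.

```python
import re

# Naive audit sketch: find {{ var }} interpolations with no filter applied.
# Anything with a pipe (e.g. {{ name | e }}) is skipped; everything else is
# flagged for manual review against the engine's autoescape configuration.
UNESCAPED = re.compile(r"\{\{\s*([^}|]+?)\s*\}\}")

def audit_template(source: str) -> list[str]:
    """Return the unfiltered variable names interpolated into a template."""
    return [m.group(1).strip() for m in UNESCAPED.finditer(source)]

template = "<p>New login from device: {{ device_name }} at {{ login_time | e }}</p>"
assert audit_template(template) == ["device_name"]   # flagged: no escape filter
```

Run over every template in the repository, a list like this turns "review every notification path" from an open-ended question into a finite checklist.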
If you build agents that read email
Add a content-trust layer that sits beneath the authentication-trust layer. SPF, DKIM, and DMARC tell you who sent the message. They do not tell you what is safe to act on.
Treat email body content as untrusted input even when the sender is verified, especially for action-taking tools — and most especially for any tool that follows links or executes instructions phrased as user requests inside the email. The agent's authentication check should remain a strong signal, but it should not be the only signal between an inbound message and the agent acting on its contents.
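A minimal content-trust check, sketched below, extracts the registrable domain of each link and compares it against the set of domains the authenticated sender is allowed to link to. The phishing host is replaced with a stand-in, and the last-two-labels extraction is deliberately naive; a real implementation must use the Public Suffix List (example.co.uk alone breaks the naive rule).

```python
from urllib.parse import urlparse

def registrable_domain(url: str) -> str:
    """Naive registrable-domain extraction: last two host labels.

    Sketch only; production code needs the Public Suffix List.
    """
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

# Domains the authenticated sender is allowed to link to:
SENDER_DOMAINS = {"robinhood.com"}

def link_trusted(url: str) -> bool:
    return registrable_domain(url) in SENDER_DOMAINS

# A genuine link passes:
assert link_trusted("https://robinhood.com/account/security")

# The attack's shape fails: the trusted brand appears only as a subdomain
# of an attacker-controlled registrable domain (stand-in host shown).
assert not link_trusted("https://robinhood.phish.example/verify/")
```

Note what a substring check would have done here: "robinhood" appears in the hostname, so naive matching trusts the phishing link. Checking the registrable domain, not the string, is the entire point.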
Closing on the trust boundary
The Robinhood email was real. That is what made it dangerous. Every signal a receiving system uses to ask "is this email authentic?" answered yes. Nobody asked the question that mattered: is the content of this email safe to act on?
Humans answered it informally, with squinting and second-guessing. Inbox-reading agents are built to skip that step in the name of speed and helpfulness. Robinhood is the first widely noticed case of a major brand's authenticated mail carrying attacker HTML at scale. It will not be the last.
Email auth tells you where a message came from. It does not tell you what it says. Build the agents you ship, and the products you let read your mail, like both questions matter.
Further reading
- A Branch Name as RCE: OpenAI Codex and the GitHub Token It Held — the same shape of bug (CWE-78 there, CWE-79 here) inside a flagship AI product, with the same lesson about adversarial inputs flowing through trusted-looking channels.
- Three Supply Chains, One Trust Relationship — when an attacker can pick which trust relationship to abuse, the defender has to harden all of them.
- The Vercel / Context.ai Breach — what happens when a token issued to a third-party AI tool is the entry point for a multi-hop chain.
- CamoLeak: Invisible Exfiltration Channel — a related Copilot-class bug where attacker-controlled content reached a privileged context.