From a Roblox Cheat to a Vercel Breach: The Context.ai Chain

Written by the Rafter Team

If you have a Vercel account, you almost certainly received a polite email this week explaining that there was an incident, that Vercel does not currently believe your credentials were compromised, and that you should review your activity log and rotate environment variables anyway. The wording is calm. The story underneath it is not.
Vercel was breached in April 2026. The entry point was not Vercel's infrastructure, and it was not Vercel's code. It was Context.ai, a third-party AI tool that one Vercel employee was using. Reporting links Context.ai's earlier compromise (which appears to have occurred in February) to a Lumma stealer infection on one of its employees' machines, with the infection vector reportedly traced to downloads of Roblox "auto-farm" exploits and executor scripts on a machine with sensitive access privileges.
Read the chain end-to-end. Reported initial vector: a Roblox cheat download. From there: a Lumma stealer infection on a Context.ai employee's machine, then access to Context.ai's internal systems, then an OAuth-pivoted takeover of a Vercel employee's Google Workspace account, then access to Vercel's internal environments, and finally access to a subset of customer environment variables that were not marked as "sensitive." A multi-hop supply chain that ends at a major developer platform — and starts at a kids' game exploit.
If you are a Vercel customer: review your project activity logs for suspicious access, rotate any environment variables that contain secrets but were not marked as "sensitive," and mark genuinely sensitive variables as sensitive going forward. The scoped advice in Vercel's bulletin is correct; the rest of this post is the bigger picture.
What Vercel actually said happened
Vercel's security bulletin and the reporting around it line up on the technical facts. An attacker compromised Context.ai, used that access to take over a Vercel employee's Google Workspace account, and from there reached internal Vercel environments. Once inside, the attacker had access to environment variables stored on the platform that were not flagged as "sensitive" — variables that, on Vercel, can be retrieved as plaintext through normal dashboard or API access. Variables explicitly marked sensitive use a different, protected storage path and were not exposed.
The number of affected customers is, by Vercel's own description, "quite limited." Vercel reached out to that subset directly and asked them to rotate credentials immediately. Everyone else got the calm email. Vercel has engaged Mandiant and is working with Microsoft, GitHub, npm, and Socket. CEO Guillermo Rauch has stated that a supply chain analysis found no evidence of compromise to Next.js, Turbopack, or other Vercel open-source projects.
A threat actor using the ShinyHunters persona has claimed responsibility on a leak forum, with an asking price of $2 million for the data, though some ShinyHunters members have publicly denied involvement. A leaked sample reportedly included a file with records for 580 Vercel employees — names, email addresses, account activity timestamps. That suggests the attacker reached more than just customer environment data.
The Roblox part is not a joke
The temptation when reading the chain is to laugh at the Roblox detail and move on. The actual lesson is the opposite. The chain is exactly as ridiculous as it sounds, and it is exactly the kind of chain that compromises serious infrastructure now.
Per the reporting, a Context.ai employee — someone with sensitive access privileges — downloaded auto-farm scripts and executor binaries for Roblox on a machine that had access to production-adjacent systems. Auto-farm scripts and Roblox executors are a well-known vector for Lumma stealer; this is not an obscure threat. The infection allegedly harvested credentials and tokens from that employee's machine, and those credentials were then used to compromise Context.ai's internal systems. Context.ai held OAuth-scope access to the Google Workspace accounts of customers who had integrated it. One of those customers was a Vercel employee. That OAuth token was the bridge into Vercel's environments.
The reason this matters is structural. Every AI tool, copilot, or integration that an employee at any vendor in your supply chain uses with privileged access is, transitively, a piece of your attack surface. The pathways are concrete: OAuth grants, transitive trust between SaaS tools, accumulated developer-tool access tokens. The Roblox cheat is funny. The structural fact that one downloaded executable rippled through three companies in ten weeks is not.
"Non-sensitive" environment variables are not safe
The other technical detail worth pulling out is the sensitive-vs-non-sensitive split on Vercel.
Vercel offers a "sensitive" toggle on environment variables. Variables marked sensitive use a protected storage path that prevents them from being read back through the dashboard or API — they are only available at deploy time, inside the build environment. Variables that are not marked sensitive can be retrieved as plaintext through normal dashboard or API access — by the user, and as we now know, by anyone who compromises Vercel's internal systems with sufficient privilege.
In practice, many teams leave variables on the default (not sensitive) because they need to look the values up frequently, share them across environments, or never thought about it. API keys, database URLs, signing secrets, and OAuth client secrets all routinely end up in non-sensitive variables. Every one of those is now potentially exposed for the affected customer subset, and tightening them is a piece of hygiene worth doing for every other Vercel customer, whether affected or not.
The right rule going forward is simple: if the variable would be a problem if posted on a public Pastebin, mark it sensitive. The bar is not "is this technically a secret" — it is "would I rotate this if it leaked." Apply that bar to your environment configuration today, not when the next incident lands.
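If you would rather apply that bar programmatically than click through each project, a short script against the Vercel REST API can flag likely secrets that are not stored as sensitive. This is a sketch, not a drop-in tool: the endpoint version, the response shape, and the "sensitive" type value are assumptions to confirm against Vercel's current API documentation, and the name-based heuristic is only a starting point.

```typescript
// Hedged sketch: list a project's environment variables via the Vercel REST API
// and flag any that look like secrets but are not stored as "sensitive".
// Assumptions to verify against Vercel's docs: the /v9/projects/{id}/env endpoint,
// the response shape, and "sensitive" as the type value.

const VERCEL_TOKEN = process.env.VERCEL_TOKEN!;   // personal or team access token
const PROJECT = process.env.VERCEL_PROJECT_ID!;   // project ID or name

// Heuristic for "this is probably a secret" — tune for your naming conventions.
const SECRET_HINTS = /(KEY|SECRET|TOKEN|PASSWORD|DATABASE_URL|DSN|PRIVATE)/i;

interface VercelEnvVar {
  key: string;
  type: string;      // e.g. "plain", "encrypted", "sensitive", "system" (assumed values)
  target: string[];  // e.g. ["production", "preview"]
}

async function auditEnvVars(): Promise<void> {
  const res = await fetch(`https://api.vercel.com/v9/projects/${PROJECT}/env`, {
    headers: { Authorization: `Bearer ${VERCEL_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Vercel API returned ${res.status}`);

  const { envs } = (await res.json()) as { envs: VercelEnvVar[] };
  for (const env of envs) {
    const looksSecret = SECRET_HINTS.test(env.key);
    const isSensitive = env.type === "sensitive";
    if (looksSecret && !isSensitive) {
      console.log(
        `ROTATE + MARK SENSITIVE: ${env.key} (${env.type}) -> ${env.target.join(", ")}`
      );
    }
  }
}

auditEnvVars().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The output is a to-do list, not a verdict: anything it flags still needs a human to decide whether the value is genuinely secret and whether it needs rotation at the source.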
What this means for Rafter customers (and everyone else)
There are two distinct lessons here, and they have different audiences.
For application code: secrets in your repository are a different problem from secrets in your platform's environment variables, but they are the same category of problem — credentials that exist in plaintext somewhere they shouldn't be retrievable from. Pre-commit and pre-merge secrets scanning, which is one of the things Rafter does, catches the in-code variant before it lands in version control. It does not catch what you've stored in Vercel's UI, which is why the second lesson exists.
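To make the in-code variant concrete, here is a deliberately minimal pre-commit check in the spirit of that scanning — a sketch, not Rafter's actual engine. It reads the list of staged files from git, greps them for a handful of obvious credential shapes, and blocks the commit on a match. Real scanners use far larger rule sets, entropy analysis, and full history scanning.

```typescript
// Illustrative pre-commit secrets check (not Rafter's engine): scan staged files
// for common credential patterns and fail the commit if any are found.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

// A few common token shapes; real scanners use far larger rule sets plus entropy checks.
const PATTERNS: [string, RegExp][] = [
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["GitHub token", /gh[pousr]_[A-Za-z0-9]{36,}/],
  ["generic secret assignment", /(api[_-]?key|secret|token|password)\s*[:=]\s*["'][^"']{12,}["']/i],
];

// Files that are staged for this commit (added, copied, or modified).
const staged = execSync("git diff --cached --name-only --diff-filter=ACM", {
  encoding: "utf8",
})
  .split("\n")
  .filter(Boolean);

let findings = 0;
for (const file of staged) {
  let text: string;
  try {
    text = readFileSync(file, "utf8");
  } catch {
    continue; // deleted or unreadable file; skip
  }
  for (const [label, pattern] of PATTERNS) {
    if (pattern.test(text)) {
      console.error(`Possible ${label} in ${file}`);
      findings++;
    }
  }
}

if (findings > 0) {
  console.error(`${findings} possible secret(s) staged; commit blocked.`);
  process.exit(1);
}
```

Wired in as a pre-commit hook it runs on every commit; the patterns and file handling here are illustrative only, and a check like this complements rather than replaces server-side, pre-merge scanning.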
For the broader supply chain: every AI assistant, copilot, integration, or "smart tool" that touches your code or your environment is now a vendor in your supply chain — with the same threat model implications as any package you depend on. The Vercel breach happened because one Vercel employee had OAuth-granted Context.ai access to their Workspace account, and Context.ai got compromised. That chain is not unique to Vercel or Context.ai. It is the default shape of how modern engineering teams work.
Audit what AI tools your team has granted account-level access to, and ask the same questions of those vendors that you ask of any other dependency: what is their security posture, who do they grant access to internally, and what does their incident response look like. The vendors that answer those questions confidently are the ones worth trusting. The vendors that don't are worth treating as a known risk.
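One concrete starting point, if your code lives on GitHub, is to enumerate the GitHub Apps installed on your organization and treat each entry as a line item in that vendor review. The sketch below assumes GitHub's org installations endpoint and a token with permission to read the org's installations; both are worth confirming against current GitHub documentation. Google Workspace admins can pull a similar list of third-party app grants from the admin console.

```typescript
// Sketch: inventory the GitHub Apps installed on your organization so each one
// can be reviewed as a supply-chain dependency. The endpoint and response shape
// are assumptions to confirm against GitHub's current REST API docs.
const GITHUB_TOKEN = process.env.GITHUB_TOKEN!; // token able to read org installations
const ORG = process.env.GITHUB_ORG!;

async function listInstalledApps(): Promise<void> {
  const res = await fetch(`https://api.github.com/orgs/${ORG}/installations`, {
    headers: {
      Authorization: `Bearer ${GITHUB_TOKEN}`,
      Accept: "application/vnd.github+json",
    },
  });
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);

  const data = (await res.json()) as {
    installations: {
      app_slug: string;
      created_at: string;
      permissions: Record<string, string>;
    }[];
  };

  for (const app of data.installations) {
    console.log(`${app.app_slug}  (installed ${app.created_at})`);
    console.log(`  permissions: ${JSON.stringify(app.permissions)}`);
  }
}

listInstalledApps().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The point of the list is the conversation it forces: for every app you cannot name an owner and a reason for, revoke it.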
What to do this week
If you are a Vercel customer, three concrete actions in priority order:
- Check your team's Activity Log. In the Vercel dashboard, go to vercel.com/dashboard/activity (or click "Activity" in your team's dashboard). This is the chronological feed of every event on your team: logins, deploys, environment-variable changes, integration connects and disconnects, domain changes, team-member invites and removals — each entry shows who did it, the event type, and the time (hover over the time for the exact timestamp). Because Context.ai's own compromise is reported to have started in February, scan from early February through today and look for anything that does not match your team's normal pattern: a team member you didn't invite, a removal no one requested, an integration connected by someone other than the person who set it up, a domain disconnected without a ticket, an environment-variable change nobody claims, or activity attributed to "Context.ai" or another third-party tool name at an hour no one was working. If anything matches, force sign-out for the implicated account, rotate its credentials, and rotate every secret it had access to.
- Audit your environment variables. In each project, open Settings → Environment Variables. For every variable that holds a real secret (API key, database URL, auth token, signing key, OAuth client secret), flip the "Sensitive" toggle on. For any variable that was holding a secret without that toggle before today, assume it could have leaked if you were in the affected subset — generate a new value at the source (a new Stripe key in Stripe's dashboard, a new database password at your DB provider, etc.) and update the Vercel value to the new one. If you have many variables to fix, a scripted version of this step is sketched after this list.
- Revoke unused third-party app access. Open myaccount.google.com/permissions (Google's "Third-party apps with account access" page) and github.com/settings/applications (GitHub's authorized OAuth apps). Revoke anything you don't actively use — especially AI tools, browser extensions, and one-off integrations you signed up for once and forgot. Whatever remains on those lists is part of your supply chain; treat it accordingly.
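For teams with many variables, the flip-and-rotate step in the second item can be scripted rather than done one variable at a time in the UI. This is a minimal sketch: the env upsert endpoint, the `upsert` query parameter, and `"sensitive"` as an accepted type value are all assumptions to confirm against Vercel's current API documentation before relying on it.

```typescript
// Sketch of the "re-add as sensitive" step via the Vercel REST API, run after you
// have generated a fresh value at the source (Stripe, your DB provider, etc.).
// Endpoint path, upsert parameter, and the "sensitive" type are assumptions.
const VERCEL_TOKEN = process.env.VERCEL_TOKEN!;
const PROJECT = process.env.VERCEL_PROJECT_ID!;

async function upsertSensitiveVar(key: string, value: string): Promise<void> {
  const res = await fetch(
    `https://api.vercel.com/v10/projects/${PROJECT}/env?upsert=true`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${VERCEL_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        key,
        value,                         // the freshly rotated secret
        type: "sensitive",             // stored write-only after creation
        target: ["production", "preview"],
      }),
    }
  );
  if (!res.ok) {
    throw new Error(`Vercel API returned ${res.status}: ${await res.text()}`);
  }
  console.log(`Stored ${key} as a sensitive variable.`);
}

// Example (hypothetical value): upsertSensitiveVar("STRIPE_SECRET_KEY", newKeyFromStripe);
```

Rotate at the source first, then write the new value; writing the old value back as sensitive protects the storage path but does nothing about a copy the attacker may already hold.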
The Vercel bulletin is being updated as the investigation proceeds; it is the canonical source of customer-specific guidance: https://vercel.com/kb/bulletin/vercel-april-2026-security-incident.
The bigger picture
The 2025–2026 cycle of supply chain incidents looks increasingly like a shift one layer up the stack. The 2024 generation was about compromised npm and PyPI packages — your dependencies turning hostile. A growing pattern in 2026 looks more like compromised AI tools and integrations — your workflow turning hostile. The Trivy compromise last month, the axios incident in March, and now the Vercel / Context.ai chain are different in surface but the same in shape: an attacker finds the highest-leverage tool in your engineering workflow and compromises its maintainer or its employees, not yours.
The defense is structural, not heroic. Pin what you can. Mark sensitive what should be sensitive. Inventory what has access to what. And recognize that the perimeter of your attack surface now extends through every OAuth grant, every authorized integration, and every transitive trust relationship you have given a vendor's tooling — all the way down to the security posture of that vendor's own workstations.
A multi-hop chain from a Roblox cheat to a major developer platform is a long way. It is also looking like the new normal.