The story of how Vercel got breached in April 2026 starts, improbably, with a Roblox cheat.
Somewhere around February of this year, an employee at Context.ai — a company that makes an AI productivity suite — was looking for a game exploit script online. They found something, downloaded it, ran it. What they actually ran was Lumma Stealer, a well-documented infostealer that quietly harvested every credential on the machine: Google Workspace passwords, API keys, Supabase tokens, Datadog credentials, and more.
From that single download, a chain of events unfolded that eventually reached Vercel’s customer environments and ended with a $2 million ransom demand on BreachForums.
What actually happened
By March 2026, the attacker — using credentials stolen from the Context.ai employee — had gotten into Context.ai’s AWS environment. That gave them access to OAuth tokens belonging to Context.ai’s users. Among those users: a Vercel employee who had connected their enterprise Google Workspace account to Context.ai’s browser extension and granted it “Allow All” permissions.
That’s all it took. With that OAuth token, the attacker had legitimate, authenticated access to the Vercel employee’s Google Workspace account. And from there, they moved laterally into Vercel’s internal environments.
Vercel moved quickly. They engaged Mandiant and worked with GitHub, Microsoft, npm, and Socket to confirm that no packages had been tampered with. The npm supply chain remained clean, and environment variables marked as sensitive were not accessed.
The chain of trust problem
None of the individual steps in this attack chain look particularly exotic. And yet, together, they bypassed the defenses of a sophisticated, well-resourced company.
That’s because security isn’t just about the strength of individual links. It’s about the length of the chain: every vendor, browser extension, and SaaS integration you trust adds another link, and any one of them can become the entry point.
Jaime Blasco, CTO of Nudge Security, put it simply: “OAuth is the new lateral movement. Until the industry treats OAuth tokens as high-value credentials, we’re going to keep reading the same breach writeup with the vendor names swapped out.”
“Be more careful” doesn’t scale
The speed at which we’re granting access to AI tools has outrun the organizational processes we have for governing that access. Most companies don’t have an OAuth inventory. Most don’t have a process for reviewing new AI tool integrations. They have a security team that writes policies and an engineering team that moves fast, and the gap between those two things is exactly where attacks like this one live.
What you can actually do
- Know what OAuth access you’ve granted. Most identity providers have a page somewhere that shows every third-party app with access to your account. Review it, and revoke anything you don’t actively use or recognize. (The first sketch after this list shows one way to pull that inventory programmatically for a Google Workspace domain.)
- Mark secrets as sensitive. Vercel’s own platform distinguishes between sensitive and non-sensitive environment variables, and in this incident the ones marked sensitive weren’t exposed.
- Think about blast radius, not just prevention. Credentials scoped tightly to what they need, rotation policies that limit the lifetime of any given secret — these don’t stop the breach, but they narrow its consequences. (The second sketch after this list shows one way to do that.)
- Treat AI tool permissions like production credentials. An AI tool with access to your enterprise Google Workspace has access to your email, your Drive, your calendar.
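To make the first item concrete, here’s a minimal sketch of pulling an OAuth grant inventory for a Google Workspace user with the Admin SDK Directory API, via the `googleapis` Node client. The email address is a placeholder, and the credential setup (a service account with domain-wide delegation, or an admin OAuth credential) is simplified; adapt it to however your domain handles admin auth.

```typescript
// Sketch: list every third-party app holding an OAuth token for a Workspace user.
// Assumes the `googleapis` package and a credential authorized for the
// admin.directory.user.security scope; the exact auth setup (service account
// with domain-wide delegation, etc.) is elided here.
import { google } from "googleapis";

async function listOAuthGrants(userKey: string): Promise<void> {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/admin.directory.user.security"],
  });
  const admin = google.admin({ version: "directory_v1", auth });

  // One entry per third-party client this user has authorized.
  const res = await admin.tokens.list({ userKey });

  for (const token of res.data.items ?? []) {
    console.log(`${token.displayText} (${token.clientId})`);
    console.log(`  scopes: ${(token.scopes ?? []).join(", ")}`);
    // To revoke a grant you no longer recognize:
    // await admin.tokens.delete({ userKey, clientId: token.clientId! });
  }
}

listOAuthGrants("employee@example.com").catch(console.error);
```

Run something like this on a schedule across your user list and alert on new grants; that’s the OAuth inventory most companies are missing.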
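And for the blast-radius point, here’s one common pattern, sketched with the AWS SDK for JavaScript v3: instead of handing a pipeline a long-lived key, mint short-lived credentials from a role and scope them down further with an inline session policy. The role ARN, bucket, and session name are placeholders, not anything from the Vercel incident.

```typescript
// Sketch: short-lived, narrowly scoped credentials instead of a long-lived key.
// The role, bucket, and region here are placeholders.
import { STSClient, AssumeRoleCommand } from "@aws-sdk/client-sts";

const sts = new STSClient({ region: "us-east-1" });

async function getScopedDeployCreds() {
  const res = await sts.send(
    new AssumeRoleCommand({
      RoleArn: "arn:aws:iam::123456789012:role/deploy-readonly",
      RoleSessionName: "ci-deploy",
      DurationSeconds: 900, // 15 minutes; a stolen token goes stale quickly
      // The session policy can only narrow what the role already allows.
      Policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
          {
            Effect: "Allow",
            Action: ["s3:GetObject"],
            Resource: ["arn:aws:s3:::example-artifacts/*"],
          },
        ],
      }),
    })
  );
  // AccessKeyId, SecretAccessKey, SessionToken, Expiration
  return res.Credentials;
}
```

If credentials like these leak the way the Context.ai employee’s did, the attacker gets read access to one bucket for fifteen minutes, not the keys to the whole environment.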
The thing about moving fast
Vercel wasn’t compromised because they were careless. They were compromised because they were part of a long trust chain, and one link in that chain — a game cheat download, on a machine that had nothing to do with Vercel — turned out to be load-bearing.
We think about these questions a lot at VM Farms — not because we have them solved, but because managed infrastructure is, in a lot of ways, a bet on where trust is best placed. If you’re working through your own threat model or want to compare notes on what we’ve learned, we’re easy to reach.