Vercel 2026: How an AI Tool Handed an Attacker Your Keys

TL;DR:
- A Vercel employee installed an AI app (Context.ai) with "Allow All" OAuth scopes on their corporate Google Workspace — two months later, an attacker was inside Vercel's infrastructure
- Environment variables not marked as sensitive were stored in plaintext and accessible via API: OpenAI keys, database URLs, internal tokens — all exposed
- The attacker was active for two months before Vercel's first public statement. A customer noticed their own leaked API key on April 10th; Vercel confirmed it on the 19th
- This isn't a story about Vercel being careless. It's a story about the real cost of adopting AI tools without a trust model
Two months.
The attacker was inside Vercel's infrastructure for approximately two months before the company published its first public statement. Before Vercel said anything publicly, a customer had already received an OpenAI alert warning that their API key had been leaked — a key that, according to the customer himself, only existed in Vercel.
Spoiler: this didn't start with a CVE. There was no zero-day in Next.js. No SQL injection in the dashboard. There was an employee who installed an AI tool and gave it permissions it should never have had.
Welcome to the threat model of the AI tool ecosystem in 2026.
The Attack Chain: From a Game Cheat Site to Production Secrets
Let's reconstruct the timeline as documented in Vercel's official bulletin and independent analyses from Trend Micro and BleepingComputer.
February 2026: An employee at Context.ai — an AI platform — downloads software from a game cheats website. Bundled with that software came the Lumma Stealer trojan, an infostealer that threat groups have favored for years due to its ability to exfiltrate browser-stored credentials, session cookies, and specifically, OAuth tokens.
The malware exfiltrated the employee's Google Workspace tokens. Up to this point, this was Context.ai's problem.
March 2026: A Vercel employee had installed Context.ai's AI Office Suite using their corporate Google Workspace account. The detail that turns this into a disaster: when authorizing the app, they selected the "Allow All" scope. Maximum permissions. No restrictions.
With the OAuth token compromised at Context.ai, the attacker inherited full access to the Vercel employee's Google Workspace account. No password. No MFA. Just a token nobody had revoked.
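To see why, here's the mechanism in miniature. This is Google's standard OAuth token endpoint with placeholder values; a sketch of the flow, not a reproduction of the attacker's tooling:

# A stolen refresh token is exchanged for fresh access tokens at Google's
# standard OAuth endpoint. Placeholder values throughout.
curl -s -X POST "https://oauth2.googleapis.com/token" \
  -d "client_id=$APP_CLIENT_ID" \
  -d "client_secret=$APP_CLIENT_SECRET" \
  -d "refresh_token=$STOLEN_REFRESH_TOKEN" \
  -d "grant_type=refresh_token"
# Response: {"access_token": "ya29...", "expires_in": 3599, ...}
# No login page, no MFA challenge: the token itself is the authentication.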
From there, the adversary pivoted into Vercel's internal systems. Vercel described the attacker as "highly sophisticated" based on their speed of movement and apparent prior knowledge of the system.
March – April 2026: The attacker enumerates environment variables not marked as sensitive across a subset of customer projects. In Vercel, those variables were stored in plaintext and retrievable via the platform's API using any valid token with the right permissions.
API keys. Database URLs. Internal tokens. All in cleartext.
April 10th: Customer Andrey Z. receives an OpenAI alert: one of his API keys was detected as leaked. The key only existed as an environment variable in Vercel. A warning signal that arrived before the official disclosure.
April 19th: Vercel publishes its official security bulletin. CEO Guillermo Rauch confirms the attack chain via X (formerly Twitter). Mandiant is brought in for forensic response.
The Problem Nobody Wanted to Name: Insecure Secrets by Default
I'll be direct, because the Trend Micro report says this clearly and it deserves more attention than it got.
In Vercel, environment variables have two modes: sensitive (encrypted, value not retrievable via API) and... everything else. Everything else is stored in plaintext and can be read by any token with project access.
The design assumed users would know how to distinguish sensitive from non-sensitive data. Spoiler: users don't do that. They never have. OpenAI API keys, database credentials, Stripe tokens — everything ends up in "regular" variables because it's more convenient, because nobody notices, because it always worked that way.
The attacker didn't need to exploit a code vulnerability. They just called Vercel's API with a valid token and politely asked: what are the values of these variables? And the API answered.
# What the attacker did (simplified):
curl -s -H "Authorization: Bearer $COMPROMISED_TOKEN" \
  "https://api.vercel.com/v11/projects/$PROJECT_ID/env" | jq .

# What they got:
# {
#   "key": "OPENAI_API_KEY",
#   "value": "sk-...",
#   "sensitive": false
# }
After the incident, Vercel changed the default behavior: new environment variables are now automatically marked as sensitive. Good. But as always: the secure architecture arrived after the incident, not before.
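For contrast, a sensitive variable can't be read back. The shape below mirrors the simplified example above and is illustrative; the real API payload may differ:

# Same call against a variable created under the new default (illustrative):
# {
#   "key": "OPENAI_API_KEY",
#   "value": null,
#   "sensitive": true
# }
# The decrypted value is never returned by the API.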
60 Days: The Dwell Time Problem
Dwell time — how long an attacker stays inside a system undetected — is one of the most critical indicators in a security incident. The global median hovers around 16 days, according to Mandiant's M-Trends report. The Vercel attacker was active for approximately 60 days.
Why so long?
Because enumerating environment variables looks like normal traffic. There's no reverse shell. No obvious lateral movement. There are API calls to project management endpoints — exactly what legitimate tools do all the time. The anomalous pattern — speed and volume of enumeration — was eventually what raised internal alerts. But by then the damage was done.
This is what makes this vector particularly dangerous: it doesn't trigger traditional detectors. No malware is running on Vercel's servers. No mass file exfiltration. Just a valid OAuth token calling legitimate API endpoints, reading data that was sitting there, waiting to be read.
In the real world, late detection isn't a failure of monitoring tools. It's a direct consequence of designing systems where legitimate access and malicious access are indistinguishable.
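That said, if you ship API access logs somewhere queryable, the volume signal is cheap to compute. A minimal sketch, assuming newline-delimited JSON logs with hypothetical token_id and path fields:

# Flag tokens hitting env endpoints at abnormal volume (assumes NDJSON
# access logs with hypothetical "token_id" and "path" fields; tune the
# threshold to your baseline).
jq -r 'select(.path | test("/projects/.+/env")) | .token_id' access.log \
  | sort | uniq -c | sort -rn \
  | awk '$1 > 50 { print "ALERT: token", $2, "read env endpoints", $1, "times" }'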
The "Allow All" Trap: OAuth as a Persistent Attack Vector
I need to stop here because this is the heart of the problem, and it's a problem that will repeat itself.
The AI tool ecosystem in 2026 works like this: you install a tool, it requests OAuth permissions over your Google (or GitHub, or Slack) account, you click "Allow" without reading the scopes, and you get on with your life. The tool works. Everything is fine.
Until it isn't.
The problem with OAuth isn't the protocol itself — it's well-designed. The problem is how humans interact with it and how third-party applications request permissions. "Allow All" shouldn't exist as an option in a corporate environment. Apps should request the minimum scopes their function needs, and security teams should require exactly that as a prerequisite for approval.
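Here's what "minimum necessary scopes" looks like in practice, using Google's real Drive scope strings as the example (the consent URL below is schematic):

# Broad scope an "Allow All"-style app requests: read/write access to every
# file in the user's Drive.
#   https://www.googleapis.com/auth/drive
# Narrow scope that covers most document tools: only files the user opens
# or creates with that app.
#   https://www.googleapis.com/auth/drive.file
# The requested scope rides in the consent URL; it's the one parameter
# worth actually reading before clicking "Allow":
#   https://accounts.google.com/o/oauth2/v2/auth?client_id=...&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.file&...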
But we live in a world where employees install AI tools with the same philosophy they use to install productivity apps on their personal phones: fast, useful, no questions asked.
The result: every AI app connected to a corporate account is a potentially persistent attack vector. If that app gets compromised, if its vendor has a breach, if one of their employees falls for a phishing attempt or downloads software from a game cheats site — that OAuth token remains valid. The attacker inherits everything the app was authorized to access.
Zero trust doesn't end at your perimeter. It ends at every third party holding tokens with access to your systems.
The Real Cost of "Moving Fast with AI"
Here comes the uncomfortable part.
Context.ai, the platform that was the entry point, is a legitimate AI tool for teams. Useful. Well-regarded. Adopted by a Vercel employee who wanted to be more productive. That's exactly what's supposed to happen at a modern company.
The problem isn't that AI was used. The problem is that AI was used without evaluating the risk model of that integration.
The pressure to adopt AI tools in 2025-2026 is real and has a name: competitiveness. Teams that don't use AI fall behind. Managers demand adoption. Employees look for tools that make them more productive. And in that context, asking someone to complete a security questionnaire before installing an AI app feels like bureaucratic friction, not a reasonable precaution.
But the SaaS/AI supply chain is the new supply chain attack vector. Every third-party tool with OAuth access to corporate systems is a link in the chain. And attackers know it.
The Vercel incident isn't an exceptional case. It's a preview. The attack surface of the AI tool ecosystem is only going to grow.
What To Do Now: Pragmatic Checklist
I'm not going to give you a 47-item best practices list nobody will read. Here's what actually matters:
1. Audit Your Corporate OAuth Integrations Today
In Google Workspace, go to Admin Console → Security → API controls → App access control. Filter by third-party applications. Revoke anything that isn't explicitly approved. If you don't know what an app does, revoke it.
For the Vercel incident specifically, the compromised OAuth Client ID was 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj. If you see it in your console, revoke it.
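If you'd rather script the audit than click through the console, the Admin SDK Directory API exposes the same per-user token grants. A sketch, assuming an admin token carrying the admin.directory.user.security scope:

# List the OAuth grants a user has handed out, then revoke one by client ID.
curl -s -H "Authorization: Bearer $ADMIN_TOKEN" \
  "https://admin.googleapis.com/admin/directory/v1/users/user@example.com/tokens" | jq .

curl -s -X DELETE -H "Authorization: Bearer $ADMIN_TOKEN" \
  "https://admin.googleapis.com/admin/directory/v1/users/user@example.com/tokens/$SUSPECT_CLIENT_ID"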
2. Mark Your Environment Variables as Sensitive
In Vercel, all variables containing secrets must be marked as sensitive. The UI now does this by default for new variables, but audit your existing ones:
# List your current production variables
vercel env ls production

# For non-sensitive variables holding secrets, remove and recreate them
vercel env rm VARIABLE_NAME production
vercel env add VARIABLE_NAME production
# Mark the new variable as sensitive in the prompt
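To find candidates at scale, you can query the same endpoint the attacker used. The jq filter below assumes the response wraps variables in an envs array with a type field; verify the shape against your own API response:

# List production variables that are NOT stored as sensitive
# (field names assumed; check your actual response shape).
curl -s -H "Authorization: Bearer $VERCEL_TOKEN" \
  "https://api.vercel.com/v11/projects/$PROJECT_ID/env" \
  | jq -r '.envs[] | select(.type != "sensitive") | .key'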
3. MFA Everywhere, No Exceptions
Enable multi-factor authentication on Vercel, Google Workspace, and all associated repositories. The compromised OAuth token was enough to bypass password-based authentication, but a properly implemented second factor adds a friction layer that matters — especially for account recovery flows and admin actions.
4. Principle of Least Privilege for Integrations
When authorizing a third-party app, review the scopes it requests. If it asks for "Allow All" and you can't justify why it needs full access, don't authorize it. If the vendor doesn't offer granular scopes, consider whether you actually need that integration.
5. Monitor Credential Leak Alerts
Services like HaveIBeenPwned, GitHub's secret scanning alerts, and leaked credential notifications from OpenAI and other providers are real threat intelligence signals. Customer Andrey Z. knew something was wrong on April 10th because OpenAI told him. Those alerts need to be configured and they need to go to someone who reads and acts on them — not an email distribution list where they go to die.
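One way to make those alerts actionable is to pull them into a pipeline you actually watch. A sketch against GitHub's secret scanning REST API; the org name is a placeholder and the token needs the appropriate security scopes:

# Pull open secret scanning alerts for an org into something monitored.
curl -s -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/orgs/$ORG/secret-scanning/alerts?state=open" \
  | jq -r '.[] | "\(.secret_type) in \(.repository.full_name)"'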
Vercel's Response: What They Did Right
To be fair — and in security, being fair matters — Vercel's disclosure handling was exemplary, according to SecurityWeek and multiple independent analysts.
They hired Mandiant the same day. Notified law enforcement. Published IOCs (Indicators of Compromise) in their bulletin. Communicated directly with potentially affected customers without waiting to have every detail nailed down. Published four updates between April 19th and 23rd, refining information as the investigation progressed.
They changed the default behavior of environment variables so new ones are sensitive going forward. Audited that npm, Next.js, Turbopack, and the rest of their packages were not compromised (GitHub and npm confirmed no unauthorized changes).
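You don't have to take that entirely on faith, either: recent versions of npm can verify registry signatures for the dependency tree you actually have installed:

# Verify installed packages against the npm registry's signatures
# (requires a reasonably recent npm).
npm audit signatures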
They also found a small additional number of accounts compromised independently of the main incident — possibly via social engineering or separate malware — and reported it on April 22-23 with the same transparency.
Where did Vercel fall short? In not designing with sensitive as the default from the start, and in the detection lag. But those are common architectural failures that many platforms share. The post-incident response was solid.
What This Incident Tells Us About the Future
The Vercel incident is an identity supply chain attack. No code was compromised. No npm package was modified. The vector was an employee who installed a legitimate AI tool with excessive permissions.
This is the pattern we're going to see repeat itself. AI tools need access to data and systems to be useful. That access is granted via OAuth, API tokens, workspace integrations. Those integrations become the attack surface.
The question isn't whether you should use AI tools in your company. The question is: do you have a process for evaluating the risk of each integration before approving it?
If the answer is "employees can install whatever they want," you have work to do.
A group operating under the "ShinyHunters" name claimed to be selling Vercel data for $2 million, allegedly including employee records, internal tokens, and GitHub/npm access. Vercel refuted these claims — particularly the npm compromise assertion, which was independently confirmed as false. But the fact that someone was attempting to monetize the breach while it was still unfolding tells you something about the threat landscape.
Trust nothing. Verify everything. And when an AI tool asks for "Allow All" — especially on a corporate account — the right answer is no.
Sources
- Vercel April 2026 Security Incident | Vercel Knowledge Base
- The Vercel Breach: OAuth Supply Chain Attack | Trend Micro
- Vercel Employee's AI Tool Access Led to Data Breach | Dark Reading
- Vercel Breach Tied to Context AI Hack | The Hacker News
- Vercel confirms breach as hackers claim to sell stolen data | BleepingComputer
- Security Update | Context.ai
- Vercel Breached via Context AI Supply Chain Attack | OX Security
- Vercel says some of its customers' data was stolen | TechCrunch
Nick Holmes, Senior Engineer. Trust the code, verify everything else. 🔐