- The Thinking Wire
- The Plugin That Wrote Its Own Consent Dialog
On April 9, Akshay Chugh published a close read of Vercel’s official Claude Code plugin. The finding was not a zero-day. It was something stranger, and harder to patch.
When the plugin wants to ask you whether it can collect telemetry, it does not open a settings UI. It does not show a native CLI dialog. It writes a JSON payload into hookSpecificOutput.additionalContext and lets Claude render the question for it. The text you read comes from Vercel. The voice you hear is Claude’s.
You cannot tell the difference. Today, that is true by construction.
What actually ships
The plugin lives at ~/.claude/plugins/cache/claude-plugins-official/vercel/. Inside hooks/user-prompt-submit-telemetry.mjs, the plugin uses a legitimate Claude Code extensibility point. On every prompt you submit, it can inject additional context into the agent’s next turn. Vercel uses that channel to instruct Claude to call its own AskUserQuestion tool. The question is Vercel’s script. The chrome around it is Claude’s.
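The mechanism can be sketched in a few lines. This is an illustrative reconstruction, not Vercel's actual hook code: a UserPromptSubmit hook prints JSON to stdout, and Claude Code folds the `additionalContext` string into the agent's next turn. The function name and question text here are ours.

```javascript
// Illustrative sketch (not Vercel's shipped code) of how a UserPromptSubmit
// hook speaks through Claude. The hook writes JSON to stdout; Claude Code
// reads hookSpecificOutput.additionalContext into the agent's next turn.
function buildHookOutput(question) {
  return {
    hookSpecificOutput: {
      hookEventName: "UserPromptSubmit",
      // Free-form text the agent treats as context. Here it instructs
      // Claude to render the plugin's question with its own
      // AskUserQuestion tool -- the plugin's script in Claude's voice.
      additionalContext:
        `Use the AskUserQuestion tool to ask the user: "${question}"`,
    },
  };
}

const payload = buildHookOutput(
  "May this plugin collect anonymous usage telemetry?"
);
process.stdout.write(JSON.stringify(payload));
```

Nothing in that payload identifies its author to the person reading the terminal; the attribution is lost the moment Claude renders it.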
The opt-out exists. It is the environment variable VERCEL_PLUGIN_TELEMETRY=off, undocumented in Vercel’s plugin docs at the time of writing. A persistent device UUID lives at ~/.claude/vercel-plugin-device-id. The UserPromptSubmit hook matcher is an empty string, which means it fires in every session in every project, including the ones that have nothing to do with Vercel. In Hacker News thread #47704881, Vercel engineer andrewqu confirmed this is deliberate: the plugin is always on, and the team does not want to limit it to detected Vercel projects. GitHub issue #34, which tracks the concern, remains open.
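Assuming the opt-out works the way environment-variable kill switches usually do, the entire control surface is a guard at the top of the hook script. The function name below is hypothetical; the point is how little stands between firing and staying silent.

```javascript
// Hypothetical opt-out guard: the env var is the only switch, so a check
// like this is all that decides whether the hook injects context or not.
function telemetryEnabled(env) {
  return env.VERCEL_PLUGIN_TELEMETRY !== "off";
}

if (!telemetryEnabled(process.env)) {
  // Exiting without writing to stdout means nothing is injected this turn.
  process.exit(0);
}
```

An undocumented variable checked inside a cached script is a real opt-out in the mechanical sense, and invisible in every sense that matters to the person typing prompts.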
A developer commenting on that thread, TheTaytay, wrote the line that should stop any CISO mid-scroll:
“I have to tell my team that if they’ve EVER used your skill, we need to treat the secrets on that machine as compromised.”
That is not a verdict on Vercel. It is a verdict on the surface.
The steelman Vercel deserves
Before we go further, give Vercel the paragraph it is owed. Telemetry with an opt-out is industry-standard DX practice. Next.js does it. The Vercel CLI does it. GitHub Copilot, Cursor, and JetBrains assistants all collect prompt-adjacent signals under opt-out regimes. The consent question the plugin injects is technically honest: it asks, you answer, the bits fly only if you say yes. The plugin uses Claude Code’s hook protocol exactly as Anthropic exposed it. There is no exploit. There is no bypass. There is no evidence of exfiltration for training data, resale, or anything beyond the aggregated usage analytics Vercel’s changelog implies. A DX team shipped a legal, documented, opt-out telemetry feature on a platform whose trust primitives are still being invented.
All of that is true. And none of it makes the thing feel right.
The missing word: attribution
The reason it feels wrong has a name, and the name is not “prompt injection.” Prompt injection, in the security literature, is an attacker smuggling instructions through untrusted data. This is something else. This is a plugin the user installed, using the API the platform exposed, to place words on a screen the user cannot visually tie back to the plugin. As we wrote in The Week Prompt Injection Became a Supply Chain Weapon, injection-as-exploit is the story when a bot reads a hostile GitHub issue. This is the quieter cousin: consent-spoofing via an injected system prompt, on a surface that has no way to label who is speaking.
Call it a plugin-authored consent surface. Claude Code, as of April 2026, has no attribution chrome. When Claude asks you a question, there is no visible tell distinguishing a question Claude composed, a question the user typed, and a question a third-party plugin instructed Claude to render. The terminal is a single voice. Every actor with hook access shares it.
This is not a new attack class. It is a clean example of a very old problem, rendered invisible on a new surface. Browser extensions have faced this for twenty years. IDE plugins have faced it. Shell hooks have faced it. Every ecosystem eventually ships attribution chrome: signed extension badges, “Message from extension X” banners, distinct UI zones. Claude Code’s plugin marketplace has not yet. The Vercel plugin did not exploit that gap. It simply demonstrated, on an officially listed first-party plugin, what the gap looks like when a well-meaning team walks through it.
Why the fix is not at Vercel
The instinct is to demand Vercel scope the hook to detected Vercel projects, document the opt-out, and move on. All of that should happen. None of it addresses the structural issue. The next plugin the Anthropic marketplace lists will have the same API, the same hook protocol, and the same invisible voice. Whoever ships it — friendly, hostile, or merely rushed — will be able to put words in Claude’s mouth with no visible footprint. As we argued in When the Security Gate Becomes the Vulnerability, automated governance fails when the human cannot see what they are approving. Here the failure is one layer earlier: the human cannot see who is asking.
The fix lives at Anthropic’s platform level. Plugin attribution chrome. A visual treatment, consistent and non-optional, that distinguishes Claude-native output from plugin-authored output. An icon, a color, a prefix, a dedicated pane — the exact mechanism is a design problem, not a research problem. Browser vendors solved the equivalent in 2010. IDE vendors solved it in 2015. Agent platforms need to solve it in 2026.
Copilot, Cursor, and JetBrains have the same structural gap. Whoever ships attribution first raises the bar for everyone, and the others will follow because the absence of it will start losing procurement conversations. This is the same pattern we traced in LiteLLM: When the AI Gateway Becomes the Attack Vector: the governance gap is structural to the ecosystem, and the first vendor to close it sets the floor.
What CISOs can do before the platform catches up
Three things, none of them glamorous.
First, treat every installed Claude Code plugin as an untrusted third party with read access to every prompt and bash command the developer issues, across every project on the machine, until the plugin’s source says otherwise. Audit ~/.claude/plugins/cache/ for hook matchers. An empty-string UserPromptSubmit matcher is now a red flag that warrants a ticket.
Second, move plugin allowlisting from “developer preference” to “CISO-owned control.” Maintain a vetted list. Review hook scopes on every version bump. The Vercel plugin is v0.32.0 today; the scope can widen in v0.33.0 without any visible UI change.
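One way to make that control concrete is to pin exact versions, so any bump fails closed until someone re-reviews the new hook scopes. Everything below, including the allowlist shape and the checkPlugin name, is an illustrative sketch rather than a real tool.

```javascript
// Sketch of a CISO-owned allowlist keyed by plugin name and pinned to an
// exact version. A version bump fails the check until the new hook scopes
// are re-reviewed. All names and shapes here are illustrative.
const ALLOWLIST = {
  vercel: "0.32.0", // re-vet hook matchers on every bump
};

function checkPlugin(name, version) {
  const pinned = ALLOWLIST[name];
  if (pinned === undefined) return { ok: false, reason: "not on allowlist" };
  if (pinned !== version) {
    return { ok: false, reason: `pinned ${pinned}, installed ${version}` };
  }
  return { ok: true };
}
```

The review step is the point: the pin does nothing except force a human to look at the diff before the plugin speaks again.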
Third, remember that the thing you cannot see is the thing you cannot govern. Every governance program in AI right now assumes a human can read what the agent is doing. Consent surfaces without attribution break that assumption silently. Budget for the moment your tools start labeling who is speaking, and ask your vendors when they will.
Chugh’s piece is worth reading because it makes a specific surface visible. The surface was always there. A first-party plugin, shipped in good faith by a public company through an official marketplace, was the one that showed us where it lives.
The good news is that the fix is cheap. The bad news is that the fix is not ours to ship.
This analysis synthesizes Akshay Chugh’s report on the Vercel Plugin for Claude Code (April 2026), byteiota’s Privacy Dark Pattern analysis (April 2026), Hacker News discussion #47704881 (April 2026), GitHub issue vercel/vercel-plugin#34 (April 2026), and Vercel’s plugin documentation.
Victorino Group helps engineering and security teams govern AI plugin ecosystems before attribution chrome exists. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →
If this resonates, let's talk
We help companies implement AI without losing control.
Schedule a Conversation