The Collina Paradox: When the AI Evangelist Triggers the Backlash
On March 15, 2026, Matteo Collina published “Software Engineering Splits in Three.” Collina is not a casual observer. He sits on the Node.js Technical Steering Committee. He created Fastify, one of the most widely deployed Node.js frameworks. He is CTO of Platformatic. His essay argued that AI is splitting software engineering into three distinct tiers, each with different economics and skill requirements.
Three days later, his own pull request to Node.js core became the flashpoint for a petition to ban AI-generated code from the project entirely.
PR #61478 added a virtual filesystem module to Node.js. It contained 21,373 lines of additions across 120 files, developed with Claude Code assistance. Fedor Indutny, a Node.js TSC Emeritus member, responded by launching a petition titled “No AI Code in Node.js Core.” Within days, over 90 developers signed it, including Kyle Simpson (author of “You Don’t Know JS”), Andrew Kelley (creator of the Zig programming language), and Jan Lehnardt (CouchDB PMC chair).
The easy reading is hypocrisy: the man who says AI will restructure engineering submits AI-assisted code and gets told to stop. But the easy reading is wrong. What happened is more instructive, and the lesson applies far beyond open source.
The Thesis That Sparked the Fire
Collina’s essay proposes three tiers of software engineering emerging from AI adoption.
Tier 1 is tech companies. Senior engineers review AI-generated output at scale. The company invests in tooling that makes this review efficient. The engineer’s value is judgment, not production.
Tier 2 is large enterprises. They cannot hire enough senior engineers to build everything in-house. Instead, they build platforms with guardrails and bring in fractional senior expertise when needed. AI handles the volume; humans handle the risk.
Tier 3 is small businesses. A new role Collina calls the “software plumber” emerges. Local developers use AI to build custom solutions for businesses that previously could not afford software. Think of the accountant who also builds your inventory system.
The framework is plausible. It maps to real trends in how organizations consume software engineering talent. And it echoes arguments we have examined before. As we explored in The Phase Shift in Software Engineering, the bottleneck has already shifted from writing code to directing and reviewing it. Collina’s tiers describe what happens when that shift plays out across different organizational scales.
But Collina’s essay has an undisclosed context that matters. Platformatic, his company, sells exactly the kind of platform-with-guardrails service that Tier 2 describes. His sponsorship disclosures list both OpenAI and Anthropic. The essay is consistent analysis, not paid promotion. Still, readers deserve to know that the author’s business model maps neatly onto the tier he describes as the largest market opportunity.
The PR That Proved the Problem
PR #61478 is instructive not because it is bad code. It is instructive because it exposes a structural problem that no open-source project has solved.
The pull request has accumulated 249 review comments and remains open two months after submission. That review burden is the real story. Whether the code is good or bad is secondary to the fact that a single contributor, assisted by AI, generated a volume of changes that consumed significant reviewer time from a volunteer maintainer pool.
This is the tension Collina’s own essay describes but does not resolve. In Tier 1, senior engineers review AI output at scale. But Node.js core is not a Tier 1 organization. It is a volunteer project. The reviewers are not paid to review. They donate their time. When one contributor can produce 21,000 lines in a single PR, the economics of volunteer review break.
Collina’s position is internally consistent: he believes AI-assisted development requires senior human judgment. His PR was submitted for exactly that judgment. The petition signatories are not arguing that Collina is wrong about needing human review. They are arguing that the review burden of AI-assisted contributions exceeds what a volunteer project can absorb.
Both sides are correct. That is what makes this a governance problem, not a moral one.
What the Petition Actually Says (and What It Bundles In)
The petition makes several arguments. Some are strong. Some are not. Treating them as a monolith, as most coverage has done, misses the real debate.
The strong arguments. AI-generated code shifts costs from the author to the reviewer. In a volunteer project, this is a resource allocation problem with no market mechanism to correct it. Reviewers cannot bill for their time. They cannot refuse to review without abdicating their maintainer role. The code arrives, and someone has to look at it.
Reproducibility is another legitimate concern. If a contributor uses an LLM to generate code, can another contributor reproduce the reasoning? Traditional code review assumes the author can explain their decisions. AI-assisted code introduces a new question: did the author make this decision, or did the model?
The weaker arguments. The petition also raises ethical sourcing of training data and environmental impact. These are real concerns in the abstract. They are not specific to Node.js core contributions. Bundling them with the operational arguments dilutes the petition’s force. A petition about reviewer burden and code provenance is actionable. A petition that also wants to adjudicate the ethics of model training is trying to solve too many problems at once.
The Identity Layer
As we analyzed in The Identity Problem: Why Developers Resist AI Tools, developer resistance to AI runs deeper than Luddism: it is an identity crisis rooted in how craft communities define competence and trust. The petition's signatory list is a case study in this dynamic.
Kyle Simpson built his reputation on deep understanding of JavaScript’s mechanics. Andrew Kelley built Zig specifically as a reaction to the complexity and opacity of existing systems languages. These are developers whose professional identities are constructed around mastery of deterministic systems. AI-generated code is, by nature, stochastic output. It violates the epistemic foundation their careers are built on.
This does not make their objections invalid. It means their objections carry both technical substance and identity weight. Separating the two is essential for productive resolution. The technical argument (reviewer burden, reproducibility) can be addressed with governance. The identity argument (what counts as engineering) cannot be resolved by policy. It resolves over time, or it does not.
What Open Source Has Not Built
The deeper issue is that open-source governance was designed for a world where the bottleneck was writing code. Contribution guidelines, code review processes, commit sign-off requirements: all of these assume that producing a contribution requires significant effort, and that this effort acts as a natural filter on contribution volume and quality.
AI removes that filter. A contributor can now produce in hours what previously took weeks. The governance mechanisms downstream of that production (review, discussion, integration) have not scaled to match.
QEMU banned AI-generated contributions outright in 2025. The Linux kernel now requires “Assisted-by” tags for AI-assisted commits. FreeBSD and Gentoo are debating their own policies. Each project is inventing its own answer to the same question.
None of them have built what is actually needed: a governance framework that accounts for asymmetric production and review costs. Banning AI code is one answer. Requiring disclosure is another. Neither addresses the fundamental economics: when producing code becomes cheap and reviewing it remains expensive, the system breaks unless you either increase review capacity or throttle production volume.
Collina himself opened an issue at the OpenJS Foundation (issue #1509) back in June 2025 asking about AI-assisted development policies. The foundation had no answer then. Nine months later, the question arrived in the form of a 21,000-line PR and a petition. The governance vacuum did not cause the conflict, but it ensured the conflict had no institutional path to resolution.
The Nonna Papera Problem
Collina uses an analogy in his writing: his grandmother (Nonna Papera) making pasta with a machine. The machine did not make her less of a cook. It made her more productive. AI coding tools, he argues, are the same.
The analogy works for productivity. It breaks on provenance. Nonna Papera’s pasta machine did not learn its technique by ingesting every pasta recipe ever published without the authors’ consent. The training data question has nothing to do with whether AI tools are useful. The question is whether the outputs carry obligations that current legal and social frameworks have not resolved.
This is the weakest part of the petition’s argument (because Node.js maintainers cannot resolve training data ethics) and simultaneously the strongest part of the broader cultural resistance (because the question is genuinely unresolved). Collina’s analogy sidesteps it. The petition overreaches on it. Neither engages with it honestly.
Three Lessons for Every Organization
The Collina paradox is not unique to open source. Every organization adopting AI-assisted development will face some version of this conflict. The specifics differ, but the structure is identical.
Lesson one: governance must precede adoption. Node.js had no policy for AI-assisted contributions when the PR landed. The absence of policy did not prevent the contribution. It prevented a productive response. Organizations that wait for the conflict to arrive before building governance will find that the conflict shapes the governance, rather than the other way around.
Lesson two: the cost shift is real. AI makes production cheap and review expensive. Any system that prices review at zero (volunteer projects, teams without code review budgets, organizations that measure output but not verification) will break under AI-assisted volume. The economics are not subtle. They are arithmetic.
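The arithmetic can be made concrete with a toy model. All numbers below are hypothetical, chosen only to illustrate the asymmetry: AI multiplies production volume, while the per-line cost of careful review stays fixed.

```python
# Toy cost model of the production/review asymmetry.
# Every number here is illustrative, not measured data.

def review_backlog_hours(lines_per_week: int,
                         review_mins_per_100_lines: float,
                         reviewer_hours_per_week: float) -> float:
    """Hours of unreviewed work accumulating each week."""
    demand = lines_per_week * review_mins_per_100_lines / 100 / 60
    return max(0.0, demand - reviewer_hours_per_week)

# Pre-AI: a contributor produces ~500 lines/week; careful review
# takes ~30 minutes per 100 lines; a volunteer reviewer has 5 hours/week.
print(review_backlog_hours(500, 30, 5))     # demand 2.5h vs 5h capacity -> 0.0

# AI-assisted: the same contributor produces ~10,000 lines/week.
# Review cost per line is unchanged, so demand is 50h against 5h capacity.
print(review_backlog_hours(10_000, 30, 5))  # -> 45.0 hours of backlog per week
```

Under these assumptions a 20x jump in production turns a comfortable surplus into a backlog that grows by a full working week every week. The exact figures do not matter; the direction does.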
Lesson three: disclosure is the minimum viable governance. The Linux kernel’s “Assisted-by” tag requirement is not a complete solution. But it establishes a baseline: the reviewer knows what they are reviewing. Without disclosure, code review becomes guesswork about provenance. With disclosure, it becomes an informed judgment about what level of scrutiny to apply. The distance between those two is the distance between governance and hope.
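As a sketch of what kernel-style disclosure looks like in practice, the commit below is hypothetical: the module, author, and tool names are invented, and the trailer format follows the article's description of the kernel requirement alongside the standard `Signed-off-by` convention.

```
vfs: add in-memory virtual filesystem module

Implement a virtual filesystem backed by an in-memory tree,
with POSIX-style path resolution and watch support.

Assisted-by: Claude Code (Anthropic)
Signed-off-by: Jane Maintainer <jane@example.com>
```

The mechanism costs the author one line. What it buys the reviewer is the informed judgment the lesson describes: knowing before the first file opens whether to read for intent or for provenance.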
What Happens Next
Collina will continue to advocate for AI-assisted development. The petition signatories will continue to resist it in projects they maintain. Neither side will convince the other, because they are arguing about different things. Collina is arguing about individual productivity. The petition is arguing about collective infrastructure. Both are right within their frame.
The resolution, when it comes, will not be ideological. It will be procedural. Some projects will build governance frameworks that accommodate AI-assisted contributions under specific conditions. Others will ban them. The projects that thrive will be the ones whose governance matches their actual capacity for review, regardless of which policy they choose.
The paradox is not that an AI evangelist triggered a backlash. The paradox is that a Technical Steering Committee member submitted a contribution to a project that had no framework for evaluating it. The tools outran the institutions. That is the story of AI adoption in 2026, in open source and everywhere else.
This analysis synthesizes Matteo Collina’s “Software Engineering Splits in Three” (March 2026), the No AI Code in Node.js Core petition (March 2026), and Collina’s DCO debate response (March 2026).
Victorino Group helps organizations build governance frameworks for AI-assisted development before the debates arrive at their door. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.