Free Software in the Agent Era: A Governance Reckoning
Richard Stallman’s “four freedoms” of free software (run, study, modify, redistribute) were always formally granted. In practice, only people who could program could exercise them. Everyone else had the freedoms on paper and the vendor’s product in reality.
AI coding agents change this calculation. A non-technical user with an agent can fork an open-source project, modify it to their needs, and deploy it. Not in theory. Right now. The freedom to modify software has become a practical capability for the first time since Stallman wrote the GPL.
George London, CTO of Upwave, argues this could make free software matter again. He is right about the direction. But the governance consequences are larger than he explores.
Documentation Traffic as a Leading Indicator
Tailwind CSS, the popular utility-first CSS framework, has seen its documentation traffic drop roughly 40% since 2023. Its creator Adam Wathan reduced the engineering team by 75% in January 2026. The documentation still exists. The framework still works. But the humans who used to read the docs are being replaced by agents that consume them programmatically.
This is the pattern we described in When Interfaces Become Disposable: agents bypass the interface layer entirely. London extends this observation to its logical endpoint. If agents can read documentation, write code, and modify open-source software on a user’s behalf, the moat of closed SaaS evaporates.
Consider Sunsama, a productivity tool. A user opened a feature request for API access in December 2019. Six years later, that request remains open. In a closed system, users wait. In an open system with an agent, users patch.
The difference between these two scenarios is not technical sophistication. It is licensing.
AGPL Becomes Strategic Again
Google maintains a broad internal ban on AGPL-licensed software. This is not a legal quirk. It is a strategic position.
The AGPL (GNU Affero General Public License) requires that if you modify AGPL-licensed software and offer it to users over a network, you must make the modified source available to those users. For SaaS companies, this means proprietary improvements built on AGPL code cannot remain proprietary. Every competitive advantage layered on top of it must be shared.
When the only people who could practically exercise these rights were engineers, AGPL was a niche concern. Now that agents can fork, modify, and deploy AGPL software for anyone, the license becomes a governance lever. Organizations choosing between open-source and proprietary infrastructure are no longer making a technical decision. They are making a governance decision about power: who controls the right to modify, extend, and redistribute the software their business runs on.
London frames this as a “platform risk” calculation, and he is correct. A SaaS vendor can raise prices, deprecate features, or lock you in. An open-source project with an AGPL license cannot. The agent makes this distinction actionable for people who would never have cared before.
The Sustainability Problem Nobody Wants to Discuss
There is a counterargument, and it is serious.
Researchers at Central European University warn that AI agents may damage open-source sustainability by severing the feedback loops between users and maintainers. Today, when a developer uses an open-source library, they file bugs, submit pull requests, participate in discussions, and occasionally donate. These interactions sustain the project.
When an agent uses the same library, none of that happens. The agent reads the code, patches what it needs, and moves on. No bug report. No pull request. No community participation. The project receives consumption without contribution.
Ghostty, an open-source terminal emulator, has already responded to this pressure. It adopted a vouch-based contribution model where new contributors must be vouched for by existing members. The stated reason: AI-generated contributions were flooding the project with low-quality patches that consumed maintainer time without adding value.
This is a governance problem in the classical sense. The resource (open-source maintainer attention) is finite. The demand on that resource (AI-generated consumption and contributions) is scaling without constraint. Without governance structures that account for agent-driven participation, the commons degrades.
Open vs. Closed Is Now a Power Decision
London quotes Vitalik Buterin: “Nonzero openness is the only way that the world does not eventually converge.” The argument is philosophical but the mechanism is concrete.
Closed platforms concentrate power. The vendor decides what features exist, what APIs are available, what integrations are permitted. Users adapt to the platform’s choices or leave. In the pre-agent era, this concentration was tolerable because the cost of switching was high and the cost of building alternatives was higher.
Agents collapse both costs. Switching cost drops because agents can learn new systems quickly. Building cost drops because agents can write the glue code, migration scripts, and custom integrations that used to require engineering teams.
This means the choice between open-source and closed-source infrastructure is now a governance decision with direct power implications. Choosing a closed platform means choosing to concentrate decision-making power with the vendor. Choosing open-source (with appropriate licensing) means distributing that power to anyone with an agent.
For enterprises, this is not an ideological question. It is a risk calculation. How much decision-making power over your core infrastructure are you willing to delegate to a vendor who may not share your interests?
What Governance Looks Like Here
The free software revival through agents creates at least three governance problems that organizations need to address.
Contribution governance. If your organization uses agents to modify open-source software, who reviews those modifications? Who decides whether to contribute them upstream? The agent can write the patch. The governance question is whether the patch meets your quality standards, aligns with your security requirements, and complies with the license obligations.
Dependency governance. When agents can freely fork and modify open-source dependencies, the attack surface for supply chain vulnerabilities expands. Today, your dependency tree is relatively static. Tomorrow, your agents may be creating custom forks of libraries that diverge from the upstream project, creating maintenance obligations and security review requirements that did not previously exist.
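That maintenance obligation can be tracked with something as simple as a fork inventory. The following is a minimal Python sketch, not a prescribed tool; the fork names, upstream URLs, and the review threshold are all hypothetical:

```python
# Sketch: inventory of agent-created forks, flagging those that carry
# local patches while falling far behind upstream (where unreviewed
# divergence and unpatched upstream fixes accumulate).
# All names and numbers below are illustrative.
from dataclasses import dataclass

@dataclass
class Fork:
    name: str
    upstream: str
    local_patches: int      # commits applied by our agents
    commits_behind: int     # upstream commits not yet merged

def needs_review(fork: Fork, max_behind: int = 50) -> bool:
    """Flag forks that diverge from upstream beyond the policy threshold."""
    return fork.local_patches > 0 and fork.commits_behind > max_behind

forks = [
    Fork("libfoo", "github.com/example/libfoo", 3, 120),
    Fork("barjs", "github.com/example/barjs", 1, 4),
]
flagged = [f.name for f in forks if needs_review(f)]
print(flagged)
```

Even a crude report like this surfaces the new obligation: every agent-created fork is a dependency your organization now maintains, whether anyone decided that deliberately or not.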
Licensing governance. AGPL, GPL, MIT, Apache. Each license creates different obligations, and agents do not understand licensing. An agent that modifies AGPL code and deploys it as a network service has triggered obligations that your legal team may not be aware of. The speed advantage of agent-driven development becomes a liability without license compliance automation.
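A first line of defense is automated license classification before agent-modified code ships. Below is a minimal Python sketch assuming SPDX license identifiers in a dependency manifest; the policy tiers and example dependencies are illustrative, and none of this is legal advice:

```python
# Sketch: map SPDX license identifiers to policy tiers before deployment.
# The tier sets and the example manifest are assumptions for illustration.
COPYLEFT_NETWORK = {"AGPL-3.0-only", "AGPL-3.0-or-later"}
COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "LGPL-3.0-only"}
PERMISSIVE = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def classify(license_id: str) -> str:
    """Return the policy tier for an SPDX license identifier."""
    if license_id in COPYLEFT_NETWORK:
        return "review-required: network deployment triggers source-sharing"
    if license_id in COPYLEFT:
        return "review-required: distribution triggers source-sharing"
    if license_id in PERMISSIVE:
        return "allowed"
    return "unknown: escalate to legal"

# Hypothetical dependency manifest an agent is about to modify.
deps = {"grafana": "AGPL-3.0-only", "requests": "Apache-2.0", "leftpad": "WTFPL"}
for name, lic in deps.items():
    print(f"{name}: {classify(lic)}")
```

A check like this wired into CI does not replace legal review; it ensures the AGPL case reaches legal review before the agent's deployment does.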
The Uncomfortable Question
The free software movement was built on a moral argument: users deserve control over the software they use. For decades, that argument was compelling in principle and irrelevant in practice. Most users could not exercise the freedoms that free software granted.
AI agents make those freedoms exercisable. That is genuinely new. But exercisable freedoms without governance produce chaos, not liberation. The right to modify software without the discipline to review, test, and maintain those modifications is not freedom. It is technical debt with a philosophical justification.
The organizations that benefit from this shift will not be the ones that embrace open-source uncritically or avoid it reflexively. They will be the ones that build governance frameworks for agent-driven software modification: clear policies on contribution, licensing compliance automation, dependency management, and quality controls that account for the speed at which agents operate.
Free software may matter again. But only if the governance matures as fast as the agents do.
This analysis synthesizes AI Agents Could Make Free Software Matter Again by George London (March 2026), CEU research on AI agent impacts on open-source sustainability, and the Ghostty contribution model as a case study in agent-era governance.
Victorino Group helps organizations build governance frameworks for agent-driven software development and open-source strategy. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →