The AI Control Problem

Your AI Stack Has a GitHub Actions Problem

Thiago Victorino
8 min read

Five companies exist specifically to make GitHub Actions faster: Namespace, Blacksmith, BuildJet, Actuated, RunsOn. Each sells the same promise: your CI/CD will run at the speed it should have run in the first place.

That fact deserves more attention than it gets. Not because GitHub Actions is broken. It processes 71 million jobs daily. It works for the vast majority of teams. The fact deserves attention because it reveals a pattern that is now repeating, at much larger scale, in AI tool adoption.

When an entire cottage industry exists to patch the problems of a platform, it means governance was ceded at the point of adoption. The workarounds are not the story. The story is how you got to a place where you need them.

The Convenience-to-Control Arc

GitHub Actions followed a trajectory so predictable it could serve as a template.

Step 1: Bundled convenience. GitHub Actions shipped embedded in GitHub. No new vendor relationship. No procurement process. No security review. It was just there, a tab in the repository you already used. The adoption decision was not a decision at all. It was a default.

Step 2: Broad adoption without governance review. Teams adopted because the friction was zero, not because someone evaluated the CI/CD landscape and concluded that GitHub Actions best served the organization’s architectural needs. Convenience, not fitness, drove the choice.

Step 3: Silent lock-in. Thousands of YAML workflow files accumulated. Each one referenced marketplace actions --- third-party code executing in your CI pipeline with whatever permissions the workflow granted. Custom runners were configured. Secrets were stored. The switching cost grew invisibly, one commit at a time.

Step 4: Structural problems surfaced. In March 2025, the tj-actions/changed-files supply chain attack compromised more than 23,000 repositories. CVE-2025-30066 earned a CISA advisory. A single marketplace action --- third-party code that teams had adopted because it was convenient --- became the vector. The attack exploited precisely the trust model that made adoption frictionless: anyone can publish an action, and most teams do not audit what they pull in.
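Auditing what you pull in is not hard; it is just never done at adoption time. As an illustrative sketch (the directory layout and regex are assumptions, not an official GitHub tool), a script can flag every marketplace action referenced by a mutable tag or branch instead of an immutable commit SHA:

```python
"""Sketch: flag marketplace actions not pinned to a commit SHA.
Illustrative only -- paths and patterns are assumptions, not an official tool."""
import re
from pathlib import Path

# A full 40-character hex ref means the action is pinned to an immutable commit.
PINNED = re.compile(r"@[0-9a-f]{40}$")
USES = re.compile(r"^\s*(?:-\s*)?uses:\s*([^\s#]+)")

def unpinned_actions(repo_root: str) -> list[str]:
    """Return third-party `uses:` references pinned to a mutable tag or branch."""
    findings = []
    for wf in Path(repo_root, ".github", "workflows").glob("*.y*ml"):
        for line in wf.read_text().splitlines():
            m = USES.match(line)
            if not m:
                continue
            ref = m.group(1)
            # Local actions (./path) and docker:// refs are out of scope here.
            if ref.startswith(("./", "docker://")):
                continue
            if not PINNED.search(ref):
                findings.append(f"{wf.name}: {ref}")
    return findings
```

A tag like `@v4` can be repointed by the action's maintainer, or by an attacker who compromises the maintainer, which is exactly what happened with tj-actions/changed-files. A pinned SHA cannot.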

Step 5: A patch ecosystem appeared. Five companies built businesses on making GitHub Actions performant. GitHub’s own larger runners, YAML anchors, and attestation features appeared. Each patch addressed a symptom. None addressed the structural cause: the organization had no governance framework at the point of adoption, and retrofitting one is vastly harder than building one.

Step 6: The platform revealed its pricing power. In December 2025, GitHub attempted to charge $0.002 per minute for the self-hosted runner control plane --- the infrastructure organizations built specifically to avoid paying GitHub’s hosted runner prices. The backlash forced a reversal. But the signal was clear: when you depend on a platform deeply enough, the platform can change the terms.

Step 7: Migration became prohibitively expensive. At this point, moving to CircleCI, GitLab CI, Buildkite, or any alternative means rewriting every workflow file, re-auditing every marketplace action dependency, reconfiguring every secret, and retraining every developer. The cost is not technical complexity. It is accumulated organizational inertia. You could migrate. You will not.

This is the convenience-to-control arc. It is not unique to GitHub Actions. It is the standard trajectory of any technology adopted for convenience without governance at the point of entry.

The Patch Ecosystem Test

Here is a diagnostic tool that works across technology categories.

If a mature ecosystem of third-party products exists to make your tool safe, fast, or reliable, you never had control of that tool. You had convenience, and now you are paying the difference.

The five runner-acceleration companies are not a sign that GitHub Actions has a performance problem that smart vendors are solving. They are a sign that organizations outsourced a critical infrastructure decision to a platform default, and are now outsourcing the consequences to a different set of vendors.

Each additional vendor in the patch ecosystem increases your dependency graph while appearing to reduce your pain. The acceleration feels like progress. It is actually compounding the original governance failure.

The Same Arc Is Happening in AI

Map the pattern.

Bundled convenience. Cloud providers ship AI APIs alongside compute, storage, and networking. Signing up for an AI capability requires no new vendor relationship. It is a checkbox in the console you already use. ChatGPT is already in the browser your team opens every morning. The adoption decision is not a decision. It is a default.

Broad adoption without governance review. Forty-four percent of organizations have business units deploying AI without IT or security involvement. The tools are adopted because they are available, not because someone evaluated them against the organization’s data governance requirements.

Silent lock-in. Prompt templates accumulate. Fine-tuned models are trained on proprietary data and tied to a specific vendor’s infrastructure. Custom integrations multiply. Each one increases switching cost. None of them feel like lock-in while they are being built. They feel like productivity.

Structural problems are surfacing. The average enterprise experiences 223 data policy violations involving AI applications per month. Eighty-three percent of organizations use AI daily, but only 13% have strong visibility into how these tools handle their data. This is the supply chain attack equivalent: the trust model that made adoption frictionless is the same model that creates ungoverned data flows.

The patch ecosystem is forming. Guardrail vendors. Prompt injection detectors. Hallucination checkers. AI firewall products. Data loss prevention tools adapted for LLM traffic. Each one addresses a symptom. Each one adds a vendor. None addresses the structural cause: governance was absent at the point of adoption.

Pricing power will be revealed. It has already started. Token pricing changes, context window pricing tiers, and the shift from flat-rate APIs to consumption-based models are the early signals. When organizations discover how deeply AI is embedded in their workflows, they will also discover how little leverage they have over the terms.

Migration will be prohibitively expensive. Not because any single AI tool is hard to replace. Because the accumulated integration --- the prompt libraries, the fine-tuned models, the workflow automations, the data pipelines feeding into vendor-specific formats --- creates organizational inertia that makes switching theoretically possible and practically unlikely.

The parallels are not metaphorical. They are structural. Marketplace actions map to AI plugins. Runner compute maps to inference compute. YAML configurations map to prompt templates. Supply chain attacks map to prompt injection. Runner accelerators map to guardrail vendors.

How to Spot It in Your Own Stack

The diagnostic is simple and uncomfortable.

Count your patch vendors. For every core platform in your stack, count the number of third-party products you use to make it secure, performant, or compliant. If the number is greater than two, you have a governance gap at the platform level. The patches are treating symptoms.

Trace the adoption decision. For each AI tool in production use, identify who made the decision to adopt it, what governance review occurred, and what exit criteria were defined. If the answer to any of those is “nobody” or “none,” you are in the convenience-to-control arc and you have not yet reached the expensive steps.

Test your switching cost. Pick one AI tool your team uses daily. Estimate, concretely, what it would take to move to a competitor. If the answer involves rewriting prompt libraries, retraining models, rebuilding integrations, and retraining users, you have lock-in. Whether you call it lock-in is a matter of preference. Whether you have it is a matter of fact.
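A back-of-the-envelope version of that estimate can be made concrete. Every figure below is a made-up placeholder; substitute your own counts and per-item efforts:

```python
# Illustrative switching-cost sketch. All rates are invented placeholders,
# not benchmarks -- replace them with your own estimates.
def switching_cost_weeks(
    prompt_templates: int,
    integrations: int,
    fine_tuned_models: int,
    users_to_retrain: int,
) -> float:
    """Rough engineer-weeks to leave the current vendor (assumed per-item efforts)."""
    return (
        prompt_templates * 0.1      # rewrite and re-evaluate each prompt
        + integrations * 2.0        # rebuild each vendor-specific integration
        + fine_tuned_models * 4.0   # re-collect data and retrain elsewhere
        + users_to_retrain * 0.05   # documentation and retraining time
    )

# e.g. 120 prompts, 6 integrations, 2 fine-tunes, 80 users
print(switching_cost_weeks(120, 6, 2, 80))  # 12 + 12 + 8 + 4 = 36 engineer-weeks
```

The exact numbers matter less than the exercise: if the total surprises you, the lock-in accumulated without anyone deciding to accept it.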

Look for the cottage industry. If vendors are selling products specifically to manage the risks or limitations of another product you use, that is the signal. The cottage industry is the market telling you that governance is missing. It is useful to listen.

What Engineering Leaders Should Do

The GitHub Actions story does not end with a recommendation to switch CI providers. That would miss the point entirely. Many teams should stay on GitHub Actions. It is a good product that works well for most use cases. The insight is not about the tool. It is about the decision process.

Govern at adoption, not after pain. The cost of evaluating an AI tool before deployment is a week of analysis. The cost of migrating after two years of ungoverned adoption is months of engineering time and organizational disruption. The math is not close.

Separate the platform from the decision. When an AI capability comes bundled with a platform you already use, that bundling is a distribution strategy, not a technical recommendation. Evaluate it on its merits. Compare it to alternatives. Define exit criteria before you enter. The friction of a deliberate evaluation is the governance.

Budget for independence. Abstraction layers, vendor-agnostic prompt formats, portable model evaluation frameworks --- these cost engineering time to build. They also cost less than a forced migration under deadline pressure. The organizations that invested in CI/CD abstraction before GitHub Actions had the smoothest response to the runner pricing scare. The same principle applies to AI.
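A minimal sketch of what such an abstraction layer can look like: a thin interface the rest of the codebase depends on, with vendor-specific adapters behind it. The provider names and method shapes here are invented for illustration; real adapters would wrap each vendor's actual SDK.

```python
# Sketch of a vendor-agnostic completion interface. The adapter classes
# are hypothetical stand-ins, not real vendor SDKs.
from typing import Protocol

class CompletionProvider(Protocol):
    def complete(self, prompt: str, max_tokens: int) -> str: ...

class VendorAAdapter:
    """Hypothetical adapter; a real one would call vendor A's SDK here."""
    def complete(self, prompt: str, max_tokens: int) -> str:
        return f"[vendor-a] {prompt[:max_tokens]}"

class VendorBAdapter:
    """Hypothetical adapter for an alternative vendor."""
    def complete(self, prompt: str, max_tokens: int) -> str:
        return f"[vendor-b] {prompt[:max_tokens]}"

def summarize(provider: CompletionProvider, text: str) -> str:
    # Application code depends only on the interface, so swapping vendors
    # is a configuration change, not a rewrite.
    return provider.complete(f"Summarize: {text}", max_tokens=256)
```

The layer is not free: it lags behind vendor-specific features, and that lag is the premium on the insurance. The question is whether the premium costs less than a forced migration. It usually does.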

Do not mistake the patch ecosystem for maturity. A rich ecosystem of guardrail vendors does not mean AI governance is solved. It means AI governance is failing at a scale large enough to be profitable. The guardrails are useful. They are not a substitute for governed adoption.

The Reckoning That Doesn’t Have to Happen

The GitHub Actions convenience-to-control arc took roughly five years from broad adoption to structural pain. AI tool adoption is moving faster, with higher stakes. The data flowing through AI tools is more sensitive than CI/CD metadata. The decisions being influenced by AI outputs are more consequential than build times. And the lock-in mechanisms --- fine-tuned models, proprietary prompt libraries, vendor-specific integrations --- are stickier than YAML files.

Ian Duncan, who wrote the original analysis of GitHub Actions, is a former CircleCI employee. He has a competitor’s bias, and that should be acknowledged. GitHub Actions genuinely works for the majority of its users, and GitHub has been improving the product with larger runners, better caching, and supply chain attestations.

But the pattern he identified is real, and it generalizes. Convenience-driven adoption without governance review leads to lock-in, which leads to structural vulnerability, which leads to a cottage industry of patches, which leads to a reckoning when the platform exercises its pricing power. This is not a story about CI/CD. It is a story about how organizations adopt technology.

The organizations that will avoid the AI version of this reckoning are the ones making deliberate decisions now. Not choosing the most sophisticated tool. Not avoiding AI. Choosing consciously, with governance at the point of entry, so that the convenience does not become a trap.

The best time to govern AI adoption was before your team started using it. The second best time is before the patch ecosystem becomes your primary vendor relationship.


Sources

  • Ian Duncan. “GitHub Actions Is Slowly Killing Your Engineering Team.” February 5, 2026.
  • CVE-2025-30066. tj-actions/changed-files supply chain attack. CISA advisory, March 2025. 23,000+ repositories compromised.
  • GitHub. Self-hosted runner control plane pricing attempt. December 2025. Reversed after community backlash.
  • Netskope Threat Labs. AI data policy violations: 223 incidents/month average per enterprise. 2025.
  • Salesforce. Survey: 83% daily AI usage, 13% strong visibility into data handling. 2025.
  • CIO.com. Survey: 44% of organizations have business units deploying AI without IT/security involvement. 2025.

Victorino Group helps organizations build AI governance at the point of adoption, not after the lock-in compounds. If your team is deploying AI tools and wants to make deliberate decisions before they become expensive ones, reach out.

If this resonates, let's talk

We help companies implement AI without losing control.

Schedule a Conversation