The Thinking Wire
AI Governance Is Leaving the Engineering Silo
Three pieces of content landed on the same day last week. None of them referenced each other. None of them were written by engineers. All three described the same problem using the same vocabulary.
Adobe announced custom Firefly models that enforce brand identity across every generated image. Jack Dorsey published an essay with Sequoia’s Roelof Botha arguing that AI should replace management hierarchy entirely. And a B2B newsletter by Dan Renyi laid out a framework for treating go-to-market as an engineered system with version control, guardrails, and quality gates.
Design. Organizational structure. Marketing. Three domains, three independent conclusions, one shared realization: the governance questions engineering has been answering for two years are now everyone’s questions.
Adobe: governance becomes a product feature
When we covered AI in enterprise design workflows earlier this year, the conversation centered on how companies like Atlassian and Meta were building internal infrastructure to constrain AI output. Instruction files, design system compliance, calibration loops. All organizational processes.
Adobe just made that a product.
Firefly’s custom models let creative teams train generative AI on their own visual style. Lighting, color palettes, line weights, compositional tendencies. The model learns the brand’s visual language and enforces it across every output. Models are private by default. Style is captured upfront, not corrected after the fact.
The numbers tell the adoption story. According to Adobe, 86% of creators now use generative AI, and 76% say it helped grow their business or brand. Amazon Fresh cut image turnaround time by 93%. Newell Brands produces campaign visuals 5x faster. IPG Health built an entirely new brand identity in 10 days.
But the interesting part is not the speed. It is what happens when speed meets inconsistency. Run a generative model without style constraints across 30 markets and hundreds of assets, and brand coherence dissolves. Adobe’s response is governance-as-guardrails: encode the rules into the model itself so that compliance is a property of the output, not a quality-control step applied afterward.
This is the exact pattern engineering teams built for code: linting, type checking, and CI validation catch violations before they ship, not after. Adobe is selling that same idea to designers. The vocabulary is different. The underlying architecture is identical.
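The parallel can be made concrete. Below is a minimal, hypothetical sketch of a "brand lint" gate in the style of a CI check: the rules (an assumed palette and a minimum logo size, both invented for illustration) are encoded once, and violations are caught before an asset ships rather than reviewed afterward.

```python
# Hypothetical pre-publish "lint" gate for generated assets, mirroring
# how CI rejects code that violates encoded rules before it ships.
# BRAND_PALETTE and the size threshold are illustrative assumptions.

BRAND_PALETTE = {"#1A1A2E", "#E94560", "#F5F5F5"}

def lint_asset(asset: dict) -> list:
    """Return a list of rule violations; an empty list means the asset passes."""
    violations = []
    off_brand = set(asset.get("colors", [])) - BRAND_PALETTE
    if off_brand:
        violations.append(f"off-brand colors: {sorted(off_brand)}")
    if asset.get("logo_min_px", 0) < 24:
        violations.append("logo below minimum size")
    return violations

def publish(asset: dict) -> bool:
    # Compliance is a property of the output: the gate runs before
    # publishing, not as a quality-control step applied afterward.
    return not lint_asset(asset)
```

The point is not the specific rules but where they live: in the pipeline, ahead of the output, exactly as a linter or type checker sits ahead of a merge.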
Dorsey: governance of the organization itself
Jack Dorsey’s essay with Roelof Botha, published on Sequoia’s site, makes a sweeping argument. Hierarchical organizations exist because human cognition has a hard limit on coordination. Leaders can manage three to eight people effectively. The Roman army solved this by nesting small units into larger ones. Every corporation since has done roughly the same.
Dorsey argues that AI breaks this constraint. If an AI “world model” can hold the entire company’s context (goals, progress, dependencies, priorities), then the information-routing function of middle management becomes redundant. Instead of status meetings and alignment sessions, you get a system that knows what everyone is working on and can coordinate directly.
He proposes three roles to replace the hierarchy: individual contributors (deep specialists), directly responsible individuals (who own cross-cutting problems for 90-day cycles), and player-coaches (who mentor and unblock without managing). Block is actively building this.
The governance implications are significant and largely unaddressed in the essay. When an AI system decides which problems are priorities, assigns ownership, and coordinates execution, who audits those decisions? Algorithmic bias in hiring is already a legal minefield. Algorithmic bias in task assignment, performance evaluation, and resource allocation introduces the same risks inside the organization.
Dorsey cites failed flat-org experiments: Spotify reverted its squad model, Zappos lost significant staff during its Holacracy transition, Valve struggled to scale past a few hundred people. His argument is that those experiments failed because they removed hierarchy without replacing its coordination function. AI, he believes, provides the replacement.
Maybe. But every failed flat-org experiment also failed at governance. Removing structure without replacing it with explicit rules about who decides what, with what authority, and subject to what review, produces chaos. An AI coordinator is still a coordinator, and coordinators need constraints.
As we explored in agent teams as an operating model, the shift from writing code to directing AI work demands new governance infrastructure. Dorsey’s proposal extends that insight from engineering teams to the entire company. The question is whether organizations will build the governance before they flatten the hierarchy or discover the need afterward.
Marketing: engineering vocabulary, marketing problems
Dan Renyi’s newsletter “GTM as Product” reads like a software architecture document that wandered into a marketing meeting. He proposes six components for treating go-to-market as an engineered system: centralized memory, contextual intelligence, repeatable workflows, strategic guardrails, quality gates, and version-controlled history.
That list is CI/CD for marketing. And Renyi is explicit about why it matters.
“The faster you move, the more certain you need to be about direction,” he writes. His concern is specific: if your positioning is slightly misaligned and you produce content manually, you publish two or three pieces a week. The damage is contained. Run that same misalignment through an agentic system, and you get twenty social posts and ten articles amplifying the wrong message before anyone notices.
This is the same velocity-risk dynamic we identified in advertising governance: autonomous systems making decisions at machine speed, measured with human-speed accountability. The delta between action and oversight is where damage accumulates.
Renyi’s contribution is making the solution explicit. Version control means you can roll back. Quality gates mean misalignment gets caught before publishing. Centralized memory means every piece of content draws from the same strategic source. These are not metaphors borrowed from engineering. They are the same controls, applied to different material.
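To see how literal the mapping is, here is a deliberately simplified sketch (all names invented, not Renyi's) of the three controls as data structures: positioning held in centralized memory, a quality gate that checks drafts against it before publishing, and an append-only history that makes rollback possible.

```python
# Illustrative sketch of GTM-as-product controls. POSITIONING stands in
# for "centralized memory"; every draft is checked against it.
POSITIONING = {"audience": "platform engineers", "claim": "governance as code"}

history = []  # version-controlled record of everything that shipped

def quality_gate(draft: dict) -> bool:
    # Misalignment is caught here, before publishing -- not after an
    # agentic system has amplified the wrong message twenty times.
    return draft.get("claim") == POSITIONING["claim"]

def ship(draft: dict) -> bool:
    if not quality_gate(draft):
        return False
    history.append(draft)  # append-only history is what enables rollback
    return True

def rollback():
    # "Version control means you can roll back": revert the last release.
    return history.pop() if history else None
```

A real system would version the positioning itself and diff drafts against it semantically; the sketch only shows where each control sits in the flow.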
The convergence is the story
Any one of these developments is interesting. Together, they reveal a structural shift.
Engineering teams started building AI governance infrastructure around 2024. They built it because they had to: production incidents, security vulnerabilities, model drift, and regulatory pressure forced the issue. The tools are now mature enough to work. Policy engines, escalation workflows, audit trails, automated testing, version control, rollback mechanisms.
What is happening now is that every other function is arriving at the same conclusion through different paths. Adobe’s designers need brand consistency at generative scale. Dorsey’s organizational architects need coordination without hierarchy. Renyi’s marketers need quality control at agentic speed.
The vocabulary converges because the underlying problem converges. When AI systems make decisions autonomously and at speed, you need:
Constraints defined upfront. Not reviewed afterward. Adobe encodes brand rules into the model. Renyi encodes positioning into centralized memory. Engineering encodes requirements into test suites. Same principle, different material.
Escalation paths when constraints fail. Adobe’s models are private by default and scoped to specific brand contexts. Engineering has incident response and rollback. Marketing and organizational design are still building these mechanisms.
Audit capacity after the fact. You need to reconstruct what decisions were made, by which system, with what inputs, and whether those decisions aligned with policy. Engineering has logging and tracing. Everyone else is working with spreadsheets or nothing.
Boundary evolution as capabilities change. AI systems improve continuously. The rules constraining them must evolve at a comparable pace. Quarterly policy reviews are already outdated when they conclude.
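The audit requirement above is the one most functions lack, and it is also the cheapest to start building. A minimal sketch, with invented field names: every autonomous decision is recorded with its inputs and the policy version in force, so you can later reconstruct what was decided, by which system, and whether the rules it ran under are still current.

```python
# Minimal audit-trail sketch: reconstruct what was decided, by which
# system, with what inputs, under which version of the rules.
import time

def record_decision(log: list, system: str, inputs: dict,
                    decision: str, policy_version: str) -> None:
    log.append({
        "ts": time.time(),          # when the decision was made
        "system": system,           # which system decided
        "inputs": inputs,           # with what inputs
        "decision": decision,       # what was decided
        "policy": policy_version,   # which rules were in force
    })

def audit(log: list, current_policy: str) -> list:
    """Return decisions made under an outdated policy version --
    the ones a reviewer needs to re-examine after rules change."""
    return [entry for entry in log if entry["policy"] != current_policy]
```

Even this much beats spreadsheets: it turns "what did the system do last quarter?" into a query instead of an archaeology project.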
What this means for organizations
The practical implication is that governance is becoming a horizontal discipline. Not an engineering specialty. Not a compliance function. A capability that every team deploying AI needs, in the same way that every team deploying software needs version control.
As we noted in The Automation Curve Is Really a Governance Curve, the question was never “what can AI do?” It was always “what should AI be allowed to do, and who decides?” That question has now migrated from engineering into design studios, marketing departments, and boardrooms debating organizational structure.
Organizations that treat governance as an engineering concern will find themselves rebuilding the same infrastructure, with the same lessons, across every function that adopts AI autonomy. Organizations that recognize the horizontal pattern early can build governance once and adapt it across functions.
The three articles from last week are a signal. Design, marketing, and organizational architecture are all reaching for engineering’s governance vocabulary because no other vocabulary exists for the problem they face. The question is whether they will also adopt engineering’s governance discipline, or just borrow the words.
This analysis synthesizes Adobe Firefly Custom Models from Creative Bloq (March 2026), From Hierarchy to Intelligence by Jack Dorsey and Roelof Botha via Sequoia (March 2026), and GTM as Product by Dan Renyi (March 2026).
Victorino Group helps organizations extend AI governance beyond engineering into design, marketing, and organizational infrastructure. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.