Company as Code: The Missing Governance Layer for AI Agents
Daniel Rothmann, who runs the technical advisory firm 42futures, recently proposed an idea worth taking seriously: treat your organizational structure as code. He calls it “Company as Code” --- a manifest file for your company that defines roles, policies, org units, and compliance mappings in a declarative DSL, version-controlled and queryable like any other infrastructure.
His motivation was compliance. During an ISO 27001 audit, he watched his firm burn hundreds of person-hours documenting organizational structures that already existed digitally everywhere else. The irony was obvious: a software company that manages infrastructure programmatically still represents its own organization as a collection of documents.
Rothmann’s observation is correct. His proposed solution is directionally right. But his framing undersells the real opportunity by an order of magnitude.
The most important reason to codify your organizational structure is not compliance automation. It is that your next employees cannot read documents.
The Problem That Compliance Doesn’t Capture
When Rothmann describes Company as Code, he emphasizes queryable compliance mappings, version-controlled policy changes, and impact analysis before organizational restructuring. These are real benefits. They save audit hours. They reduce risk.
But they solve a problem for humans.
The larger structural shift is that organizations are adding a new category of worker --- AI agents --- that operates fundamentally differently from human employees. Human employees can read a Confluence page about approval workflows, infer organizational norms from hallway conversations, and figure out who to ask when a process is unclear. AI agents cannot do any of this.
An AI agent needs machine-readable answers to four questions:
- Who can do what? Roles and permissions as enforceable constraints, not narrative descriptions.
- What rules apply? Policies as executable logic, not PDF documents.
- How does work flow? Process definitions with explicit handoff points, approval gates, and escalation paths.
- Where does the agent fit? Its own position in the organizational graph --- what it can access, what it cannot, and who oversees it.
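To make the four questions concrete, here is a minimal sketch of what a queryable answer could look like: one manifest with a section per question, and a check that consults roles and policies instead of a prompt. Every role name, policy, and permission below is hypothetical.

```python
# A minimal sketch: the four questions answered by one queryable manifest.
# All names, roles, and permissions are hypothetical illustrations.

manifest = {
    # Who can do what? Roles as enforceable constraints.
    "roles": {
        "support-agent": {"permissions": ["read:tickets", "write:replies"]},
    },
    # What rules apply? Policies as data, not PDFs.
    "policies": {
        "pii-handling": {"applies_to": ["support-agent"], "forbids": ["export:customer-data"]},
    },
    # How does work flow? Explicit gates and escalation paths.
    "workflows": {
        "refund": {"gates": ["manager-approval"], "escalation": "support-lead"},
    },
    # Where does the agent fit? Its position in the organizational graph.
    "agents": {
        "triage-bot": {"role": "support-agent", "overseen_by": "support-lead"},
    },
}

def agent_may(agent: str, action: str) -> bool:
    """Check an action against the agent's role permissions and applicable policies."""
    role = manifest["agents"][agent]["role"]
    if action not in manifest["roles"][role]["permissions"]:
        return False
    for policy in manifest["policies"].values():
        if role in policy["applies_to"] and action in policy["forbids"]:
            return False
    return True
```

The point is not the data format --- YAML, a DSL, or a graph database would all work --- but that the answer to "may this agent do this?" comes from a shared, versioned source rather than from each agent's configuration.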
Without machine-readable answers to these questions, every agent deployment requires bespoke governance engineering. You hardcode boundaries into each agent’s prompt. You build custom approval flows for each workflow. You create ad hoc permission systems that nobody maintains. This is the organizational equivalent of writing shell scripts instead of using Terraform.
Conway’s Law Needs an Update
In 1968, Melvin Conway observed that organizations produce system designs that mirror their communication structures. A company with four teams building a compiler will produce a four-pass compiler. The observation has held up remarkably well for nearly sixty years.
The insight behind Conway’s Law is that organizational structure is an invisible but powerful constraint on what gets built. Teams that cannot communicate easily will not build tightly integrated systems. Reporting lines shape architectural boundaries.
But Conway assumed all the actors in the organization were human. The “communication structures” he described were human communication structures --- meetings, memos, reporting hierarchies.
When your workforce includes AI agents, Conway’s Law still applies, but with a new implication. Agents do not attend meetings. They do not read memos. They do not absorb organizational culture through proximity. For an AI agent, the organizational structure it can “see” is limited to whatever has been made machine-readable. Everything else is invisible.
This means organizational structure must be expressed as code --- not primarily for compliance efficiency, but because a growing portion of your workforce literally cannot perceive it otherwise. The org chart that exists only in a slide deck, the approval process that lives in someone’s head, the policy that was communicated verbally in a team meeting --- none of these exist for an AI agent.
Conway’s Law inverted: when your employees include machines, your org structure does not just shape the code. Your org structure must become code.
What Already Exists (And What Doesn’t)
The building blocks for Company as Code are not hypothetical. Several domains have already made this transition.
Infrastructure as Code is mature. Terraform and Pulumi turned infrastructure provisioning from manual processes into declarative, version-controlled configurations. The pattern is proven: define the desired state, let the system converge.
Policy as Code is operational. Open Policy Agent (OPA) and HashiCorp Sentinel codify authorization and compliance policies as executable logic. Kubernetes admission controllers enforce them at runtime. The concept of a policy that a machine can evaluate --- not just a human --- is well-established.
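The pattern behind these tools can be shown in a few lines: a policy is executable logic over structured input, evaluated at an enforcement point. The sketch below is in Python rather than OPA's Rego, and the rule itself is a hypothetical illustration, not a real admission policy.

```python
# The policy-as-code pattern in miniature: a rule as a pure function over
# structured input, in the spirit of an admission decision. The specific
# rule and registry name are hypothetical.

def admit(request: dict) -> bool:
    """Reject workloads that run as root or pull images from unvetted registries."""
    allowed_registries = {"registry.internal.example.com"}
    registry = request["image"].split("/")[0]
    return not request.get("run_as_root", False) and registry in allowed_registries
```

Because the rule is data-plus-logic rather than prose, the same definition can be evaluated by a CI check, an admission controller, or an AI agent deciding whether an action is in bounds.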
Configuration as Code is ubiquitous. CI/CD pipelines, monitoring rules, feature flags --- the operational layer of most software organizations already lives in version control.
What does not exist yet is the organizational layer. The part that defines: “This is a team. These are its responsibilities. This person has this role. This role can approve these types of decisions. This policy applies to these organizational units.” That layer still lives in HR systems, Confluence pages, slide decks, and institutional memory.
Rothmann’s proposed DSL for defining organizational entities --- roles, people, org units, policies, compliance mappings --- is a reasonable starting point for closing this gap. His insight that organizational relationships form a cyclic graph (unlike the directed acyclic graphs of infrastructure dependencies) is technically important. People belong to teams that depend on policies that reference roles that are held by people. The circular references are a feature, not a bug. They reflect how organizations actually work.
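The cyclic-graph point is easy to demonstrate. In the sketch below (entity names hypothetical), a person belongs to a team, the team is governed by a policy, the policy requires a role, and the role is held by that same person --- so a walk from any node eventually returns to it, which no DAG-based tool would permit.

```python
# Why organizational relationships form a cyclic graph, not a DAG.
# Edges: person -> team -> policy -> role -> person. Entities are hypothetical.

edges = {
    ("person:ada", "member_of", "team:platform"),
    ("team:platform", "governed_by", "policy:access-review"),
    ("policy:access-review", "requires", "role:reviewer"),
    ("role:reviewer", "held_by", "person:ada"),  # closes the cycle
}

def reachable(start: str) -> set[str]:
    """Walk outgoing edges; reaching the start node again proves a cycle."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for src, _relation, dst in edges:
            if src == node and dst not in seen:
                seen.add(dst)
                stack.append(dst)
    return seen
```

Any tooling built for this layer has to handle such cycles gracefully --- a plain topological sort, the workhorse of infrastructure dependency resolution, would simply fail here.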
The Agent Governance Connection
We have written previously about the governance challenges of multi-agent systems. When Carlini ran 16 AI agents on a compiler project, most of his effort went into coordination infrastructure: locking mechanisms, task-claiming protocols, merge conflict handling. This was governance work, not coding work.
Anthropic’s Claude constitution demonstrates the same principle at the model level: a structured hierarchy of priorities (safety, ethics, compliance, helpfulness) with clear rules about what is hardcoded versus what operators can customize.
Company as Code is the organizational equivalent. It provides the structural context that agents need to operate within boundaries --- not because someone hardcoded those boundaries into a prompt, but because the boundaries are defined in a queryable manifest that any agent can reference.
Consider a practical scenario. An AI agent processing expense reports needs to know: What is the approval threshold for this employee’s role? Who is the approver? Does the expense category require additional compliance review? In a document-based organization, a human would know the answers from experience. An AI agent needs a structured data source. A company manifest provides exactly this.
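As a sketch of that scenario, the routing decision reduces to a few lookups against structured data. The employees, limits, and categories below are hypothetical; the point is that the agent queries them rather than guessing.

```python
# Hypothetical sketch: an expense-processing agent answering its three
# questions from structured data instead of institutional memory.

ORG = {
    "employees": {"jkim": {"role": "engineer", "manager": "avela"}},
    "roles": {"engineer": {"expense_limit": 250}},
    "categories": {
        "travel": {"compliance_review": True},
        "software": {"compliance_review": False},
    },
}

def route_expense(employee: str, amount: float, category: str) -> dict:
    """Decide routing: auto-approve, send to the approver, flag for compliance."""
    role = ORG["employees"][employee]["role"]
    needs_approval = amount > ORG["roles"][role]["expense_limit"]
    return {
        "approver": ORG["employees"][employee]["manager"] if needs_approval else None,
        "compliance_review": ORG["categories"][category]["compliance_review"],
    }
```

When the approval limit changes, it changes in one place, and every agent that queries the manifest picks it up immediately.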
Or consider agent-to-agent coordination. When multiple agents operate across different business functions, they need to understand organizational boundaries. Can the sales agent commit to a delivery timeline? Does the procurement agent have authority to approve a vendor? These are organizational questions with answers that should be queryable, not embedded in individual agent configurations.
The Practical Starting Points
The vision of a complete company manifest is ambitious. The practical path starts smaller.
Codify your approval chains. Most organizations can identify their critical approval workflows. Express them as structured data: who approves what, at what thresholds, with what escalation paths. This is immediately useful for any agent that participates in approval-gated processes.
Define agent roles explicitly. When you deploy an AI agent, define its organizational position the way you would define an employee’s role. What can it access? What decisions can it make autonomously? What requires human approval? Store these definitions in version control, not in prompt templates.
Map your policy constraints. Start with the policies that most frequently affect agent behavior: data access policies, communication policies, financial authorization limits. Express them in a format that agents can query at runtime.
Version control the changes. The power of any “as code” approach is the audit trail. When a policy changes, when a role is modified, when an approval chain is updated --- these changes should be tracked, reviewed, and deployable like any other code change.
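These starting points compose naturally into a single versioned artifact. The sketch below combines them --- an agent's role, its autonomy limit, and its escalation chain as one structured definition that can be serialized and committed. The field names and values are hypothetical, not a proposed standard.

```python
# Sketch of the starting points combined: one agent's role, approval
# boundaries, and escalation path as a version-controlled definition.
# All field names and values are hypothetical.

import json
from dataclasses import dataclass, asdict

@dataclass
class AgentRole:
    name: str
    can_access: list[str]             # data access constraints
    autonomous_limit: float           # decisions below this need no human
    requires_human_approval: list[str]
    escalation_path: list[str]        # ordered approval chain

role = AgentRole(
    name="procurement-bot",
    can_access=["vendors:read", "purchase-orders:write"],
    autonomous_limit=1_000.0,
    requires_human_approval=["new-vendor", "contract-change"],
    escalation_path=["procurement-lead", "cfo"],
)

# Serialized and committed next to the rest of your infrastructure code,
# a change to this definition gets the same review and audit trail as
# any other code change.
definition = json.dumps(asdict(role), indent=2)
```

A pull request that raises `autonomous_limit` is then a reviewable, revertible governance event --- which is the audit trail the "as code" pattern exists to provide.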
None of this requires building Rothmann’s full DSL or graph database architecture. It requires the decision to treat organizational structure as a first-class input to your technology stack rather than a background assumption.
What Happens If You Don’t
The default path is already visible. Organizations deploy agents with governance baked into individual implementations. Each agent has its own understanding of organizational boundaries, encoded in its configuration. When the organization changes --- new approval limits, restructured teams, updated policies --- someone has to update every agent individually. Or nobody does, and the agents operate under stale assumptions.
This is the same problem Infrastructure as Code solved for servers. Manual, per-instance configuration that drifts from intended state over time. The solution was the same then as it is now: declare the desired state in one place, and let everything reference it.
The organizations that will operate AI agents effectively at scale are the ones that build the organizational data layer those agents need. Not because a consultant told them to, and not because an ISO auditor required it. Because their workforce now includes entities that cannot function without it.
Sources
- Daniel Rothmann. “Company as Code.” 42futures blog, February 2025.
- Melvin E. Conway. “How Do Committees Invent?” Datamation, April 1968.
- Nicholas Carlini. “Building a C compiler with Claude.” Anthropic Research Blog, February 2026.
- HashiCorp. “Policy as Code.” Sentinel documentation, 2025.
- Open Policy Agent. “Introduction to OPA.” openpolicyagent.org, 2025.
- Anthropic. “Claude’s Constitution.” anthropic.com, January 2026.
Victorino Group helps organizations build the governance infrastructure that autonomous AI agents require to operate within defined boundaries. If you are evaluating agent deployment and need help designing the organizational layer that makes it work, reach out.