Governance as Advantage

Against Cleverness: Design Principles for AI in Complex Systems

Thiago Victorino

Today we stand at the cusp of revolutions in artificial intelligence, autonomous vehicles, renewable energy, and biotechnology. Each brings extraordinary promise. Each also introduces more complexity, more interdependence, and more latent pathways to failure.

This is what makes prudence essential. Good design recognizes what cannot be foreseen. It builds not merely for performance, but for recovery.

The Blame Reflex

When something goes wrong, our gut reaction is to find the person closest to the failure. Safety science calls this the “active failure”: the operator who pressed the wrong button, the analyst who missed the signal.

But this reflex is a vestige of an older worldview, one where human vigilance was assumed to be the primary safeguard against failure.

Now we know better.

A system is perfectly designed to get the results it gets. If a system produces recurring failures, the fault lies not with the operator but with the structure that shaped the operator’s choices.

Good design aims not at perfect people but at ordinary people performing reliably under normal conditions.

Three Frameworks That Changed Everything

1. Latent Errors: The Why

James Reason’s Swiss Cheese Model teaches us why systems fail: latent conditions accumulate, hide, and align. Every shortcut, every unexamined assumption, every added layer of complexity is a pathogen waiting for the right conditions to cause harm.

Design decisions made today become the latent failures of tomorrow.

2. The Automation Paradox: The How

The more we rely on automation, the more we must rely on it, because that reliance steadily erodes the human skills needed to take over. When automation works, humans deskill. When automation fails, humans cannot recover.

It is a vicious cycle, and not an easy one to escape.

3. Rasmussen’s Conundrum: The Where

Automation excels in narrow, controlled environments but collapses at the edges. Superhuman peak performance means nothing if you cannot ensure conditions stay within that narrow range.

The question isn’t whether your AI is better than humans in ideal conditions. It’s whether conditions will remain ideal.

Admiral Rickover’s Wisdom

Few figures embody conservative design philosophy better than Admiral Hyman G. Rickover, the father of the nuclear navy. Under his leadership, the US Navy designed and built the first nuclear submarines, whose crews lived and worked next to nuclear reactors, with zero catastrophic failures.

Rickover’s philosophy was simple:

  • Favor the proven over the novel
  • Choose the simple over the clever
  • Prefer the transparent over the abstract
  • Demand direct accountability over distributed blame

He required engineers to understand every system they touched, to foresee how it could fail, and to take personal responsibility for its performance.

This philosophy is not opposed to innovation. It is opposed to undue confidence and corner cutting.

Why AI Is Different

AI is not merely another automation layer. It is a new kind of agent inside our systems—opaque, statistical, fast, and prone to unfamiliar failure modes. It makes predictions rather than following instructions. Its logic is embedded in inscrutable data patterns rather than explicit rules.

AI Accumulates Latent Failures

AI systems learn from datasets we did not fully inspect, absorb correlations we did not intend, and behave in ways that are not visible from the outside. A model might perform flawlessly for months before a quiet change in data distribution causes an abrupt collapse.

Every training decision, every data preprocessing choice, every hyperparameter is a potential pathogen. And unlike traditional software where we can inspect the logic, AI embeds these decisions in millions of parameters that no human can comprehend.
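
These pathogens cannot be eliminated, but they can be watched for. Below is a minimal sketch of one common mitigation: monitoring production inputs for the kind of quiet distribution shift described above. The feature-by-feature comparison, the notify_on_call helper, and the thresholds are illustrative assumptions, not a prescription.

```python
# Minimal drift check: compare recent production inputs against a
# training-time baseline with a two-sample Kolmogorov-Smirnov test.
# Feature names and the 0.01 cutoff are illustrative assumptions.
from scipy.stats import ks_2samp

def detect_drift(baseline: dict, recent: dict, p_threshold: float = 0.01) -> list:
    """Return features whose recent values no longer match the training baseline."""
    drifted = []
    for feature, baseline_values in baseline.items():
        _, p_value = ks_2samp(baseline_values, recent[feature])
        if p_value < p_threshold:  # the two samples differ more than chance allows
            drifted.append(feature)
    return drifted

# Usage: run on a schedule and page a human before the model quietly degrades.
# drifted = detect_drift(training_sample, last_week_sample)
# if drifted:
#     notify_on_call(f"Input drift detected in: {drifted}")  # hypothetical helper
```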

AI Erodes Human-Centered Design

A well-designed traditional system has clear cause-and-effect relationships. You turn the dial, the temperature changes. You can build a mental model of how it works.

AI systems break this clarity. You provide input, you get output, but the relationship between them is inscrutable. Why did the AI make this recommendation? What factors did it consider? What would happen if conditions changed?

These questions often have no satisfying answers.

AI Intensifies the Automation Paradox

AI failure modes are less predictable than those of traditional automation: it doesn’t just stop working; it confidently produces wrong answers. AI operates in domains requiring judgment, not just mechanical tasks. It deskills faster because it handles tasks humans used to do cognitively, not just physically.

Recovery is harder because humans may not recognize AI errors without domain expertise.

Six Principles for AI Design

1. Assume AI Will Fail

Design systems assuming AI will fail, not assuming it will work. Build clear handoff protocols when AI reaches its limits. Maintain human oversight for critical decisions. Create fallback mechanisms that don’t depend on AI.
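
As a concrete illustration, here is a minimal sketch of such a handoff, assuming a model that exposes a confidence score alongside its prediction. The predict_with_confidence interface, the fallback and escalation hooks, and the 0.8 floor are assumptions for the example, not a reference design.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    source: str  # "model", "fallback", or "human"

CONFIDENCE_FLOOR = 0.8  # illustrative; tune per application and risk level

def decide(case, model, rule_based_fallback, escalate_to_human):
    """Use the model when it is healthy and confident; otherwise hand off."""
    try:
        prediction, confidence = model.predict_with_confidence(case)
    except Exception:
        # The model is down or erroring: the fallback must not depend on it.
        return Decision(action=rule_based_fallback(case), source="fallback")

    if confidence < CONFIDENCE_FLOOR:
        # The model has reached its limits: escalate rather than guess.
        return Decision(action=escalate_to_human(case), source="human")

    return Decision(action=prediction, source="model")
```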

2. Preserve Human Capability

Don’t allow AI to completely deskill human operators. Keep humans in the loop for critical decisions. Require periodic manual operation. Train for exceptions, not just normal operation.
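
One simple way to keep manual skills from atrophying is to route a small share of routine cases through the human path on purpose. The sketch below assumes hypothetical handle_manually and handle_with_ai workflows; the 5% share is arbitrary.

```python
import random

MANUAL_SHARE = 0.05  # illustrative: handle 5% of routine cases by hand

def route(case):
    """Deliberately send some cases through the manual path to keep skills warm."""
    if random.random() < MANUAL_SHARE:
        return handle_manually(case)   # hypothetical human workflow
    return handle_with_ai(case)        # hypothetical AI-assisted workflow
```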

3. Demand Transparency

Insist on explainable AI for any consequential application. Understand what factors influence decisions. Know the confidence level of predictions. Recognize when the AI is operating outside its competence.
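
A minimal sketch of what that insistence can mean in practice: refuse to act on a prediction unless it is recorded with its confidence, the factors behind it, and a flag for inputs outside the training range. The field names and the range check are assumptions for illustration, not a standard schema.

```python
from datetime import datetime, timezone
import json

def record_decision(model_version, inputs, prediction, confidence,
                    top_factors, training_ranges):
    """Persist an explanation record and flag inputs outside the model's competence."""
    out_of_range = [
        name for name, value in inputs.items()
        if not (training_ranges[name][0] <= value <= training_ranges[name][1])
    ]
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "prediction": prediction,
        "confidence": confidence,
        "top_factors": top_factors,              # e.g. feature attributions
        "outside_training_range": out_of_range,  # competence warning
    }
    print(json.dumps(record))  # stand-in for an append-only audit log
    return record
```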

4. Define Clear Boundaries

Explicitly define where AI should and shouldn’t be used. Set hard limits on autonomy in high-stakes situations. Maintain explicit human authority for final decisions. Accept that some tasks should never be fully automated.
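
One way to make those boundaries executable rather than aspirational is an explicit allowlist. This sketch assumes a hypothetical run dispatcher, and the action names are chosen purely for illustration.

```python
# Actions the AI may execute on its own; everything else needs a named approver.
AUTONOMOUS_ACTIONS = {"draft_reply", "flag_for_review", "rank_results"}

class HumanApprovalRequired(Exception):
    pass

def execute(action_type, payload, approved_by=None):
    """Run low-stakes actions directly; demand a named human for the rest."""
    if action_type in AUTONOMOUS_ACTIONS:
        return run(action_type, payload)  # hypothetical dispatcher
    if approved_by is None:
        raise HumanApprovalRequired(
            f"'{action_type}' lies outside the AI's autonomy boundary"
        )
    return run(action_type, payload, approved_by=approved_by)
```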

5. Design for Recovery

Plan for what happens when AI fails, not just how it performs when it works. Build clear error detection and signaling. Enable graceful degradation rather than catastrophic failure. Create recovery protocols that don’t require AI expertise.
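
A classic pattern for graceful degradation is a circuit breaker around the AI path: after repeated failures, stop calling it for a while and serve a degraded but dependable alternative. The sketch below is generic; the failure count and cool-down period are illustrative.

```python
import time

class CircuitBreaker:
    """Stop calling a failing AI path for a while and degrade gracefully instead."""

    def __init__(self, max_failures=3, cool_down_seconds=300):
        self.max_failures = max_failures
        self.cool_down_seconds = cool_down_seconds
        self.failures = 0
        self.opened_at = None

    def call(self, ai_path, degraded_path, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cool_down_seconds:
                return degraded_path(*args)          # breaker open: degrade
            self.opened_at, self.failures = None, 0  # cool-down over: retry AI

        try:
            result = ai_path(*args)
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()    # trip the breaker
            return degraded_path(*args)
```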

6. Take Responsibility

Maintain human accountability for AI-made decisions. Someone must always “sign their name.” Regularly review AI performance and errors. Be willing to roll back AI when it underperforms.
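
Accountability can be wired into the review loop itself. The sketch below assumes a hypothetical disable_model switch and decisions already reviewed by humans; the 2% error budget is an arbitrary illustration of what "willing to roll back" can look like.

```python
ERROR_RATE_LIMIT = 0.02  # illustrative: the error budget a named owner has accepted

def review_and_maybe_roll_back(reviewed_decisions, owner):
    """Roll the model back, under a named owner's signature, if it underperforms."""
    errors = sum(1 for d in reviewed_decisions if d["was_wrong"])
    error_rate = errors / max(len(reviewed_decisions), 1)
    if error_rate > ERROR_RATE_LIMIT:
        # A person signs the rollback, not an anonymous process.
        disable_model(signed_by=owner, reason=f"error rate {error_rate:.1%}")
        return "rolled_back"
    return "kept"
```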

The Designer’s New Responsibility

In 1990, James Reason warned: “A point has been reached in the development of technology where the greatest dangers stem not so much from the breakdown of a major component or from isolated operator errors, as from the insidious accumulation of delayed-action human failures occurring primarily within the organizational and managerial sectors.”

If that was true before the internet was ubiquitous, it is far more true today.

Designers inherit a new responsibility. Their task is not merely to make systems functional or efficient, but to make them understandable. To build systems with fewer hidden couplings. To reduce opacity. To create clear cause-effect relationships. To design for transparency, resilience, and recovery.

The Path Forward

The future of design requires holding two truths simultaneously:

First, technology—including AI—offers genuine benefits. It can enhance human capability, reduce errors in routine tasks, reveal patterns we couldn’t see.

Second, technology—especially AI—introduces new failure modes, new latent errors, new paradoxes that make systems more fragile precisely when they appear most capable.

The solution is not to reject technology but to deploy it with wisdom inherited from generations of systems thinking and human factors research.

The future of design is not about making systems smarter. It’s about making systems wiser.

Systems that know their limits. That acknowledge their failures. That preserve the human capabilities that technology promises to enhance but often erodes.


This analysis draws on Michael Parent’s article “Against Cleverness” (UX Collective, January 2026), which synthesizes James Reason’s Swiss Cheese Model, Rasmussen’s automation research, and Admiral Rickover’s conservative engineering philosophy.

If this resonates, let's talk

We help companies implement AI without losing control.

Schedule a Conversation