Key Takeaways
- The 64-point gap between AI use and governance is a design problem, not an ethics problem
- The best governance is invisible — paved roads, not policy documents
- Apple's design philosophy applies directly to governance architecture
- Governance that creates friction will be routed around, not adopted
- Behavioral nudges outperform compliance mandates for governance adoption
Apple's design philosophy applied to building governance people actually use
Design Is How It Works
Steve Jobs said it plainly: "Design is not just what it looks like and feels like. Design is how it works." Your AI governance program looks great. Policies, committees, review boards, impact assessments, risk registers. But does it work? When was the last time your governance actually changed a deployment decision? When was the last time an engineer chose to use your governance tools voluntarily — not because compliance required it, but because the governance made their work better?
The evidence suggests the answer is rarely. 78% of organizations now use AI in their operations, yet only 14% have enterprise-level AI governance frameworks in place. That is a 64-percentage-point gap. And it is not an ethics problem. It is not a regulation problem. It is not a staffing problem. It is a design problem. Organizations have governance. People do not use it. The governance was designed for the governance team, not for the people who need it.
Teams spend 56% of their time on governance-related activities when using manual processes — more than half of AI talent focused on compliance paperwork instead of value creation. That is the equivalent of a mobile app that takes 30 seconds to open. Nobody would use it. Nobody would blame the user for uninstalling it. They would blame the designer. The same standard should apply to governance.
This article applies the design philosophy that created the iPhone, the Braun T3 radio, and GOV.UK to the governance programs that govern AI. It translates Dieter Rams' 10 principles of good design into 10 principles of good governance. It applies the Stanford d.school's five-stage design thinking process to governance creation. And it articulates a vision: the best governance, like the best infrastructure, is invisible. You do not notice plumbing when it works. You do not notice electricity when it flows. And you should not notice governance when it enables responsible AI.
If your governance program were an app in the App Store, what would its rating be? If the answer is less than 4 stars, the problem is not your users. The problem is your design.
The Design Audit
Dieter Rams' 10 Principles of Good Design → 10 Principles of Good Governance
1. Innovative: anticipates new risks before they emerge
2. Useful: makes responsible decisions easier, not harder
3. Aesthetic: coherent structure practitioners can see and trust
4. Understandable: self-explanatory without a 50-page manual
5. Unobtrusive: embedded in workflows, not bolted on
6. Honest: acknowledges what it cannot cover
7. Long-lasting: principle-based, not rule-based
8. Thorough: consistent from boardroom to deployment
9. Resource-conscious: respects practitioners' time and attention
10. As little as possible: less, but better; every element earns its place
"Weniger, aber besser": less, but better. (Dieter Rams' personal motto)
Score your governance program against each principle. Below 7/10 on any = a design failure waiting to happen.
The Governance UX Problem
What if we evaluated governance like a product?
If your governance program were a product, it would be pulled from the shelf. The metrics tell the story: only 14% of organizations have enterprise-level AI governance despite 78% using AI. 56% of practitioner time consumed by manual governance processes. 76% of organizations fail in multi-departmental governance coordination. In product design, these numbers would constitute a crisis. A product with 14% adoption, 56% friction, and 76% coordination failure would be redesigned immediately — or killed.
But governance is not evaluated like a product. It is evaluated like a policy: did we write it? Did we publish it? Did people sign it? These are compliance metrics, not design metrics. They measure existence, not effectiveness. They are the governance equivalent of measuring an app's success by how many people downloaded it, ignoring that 86% uninstalled it within a week.
The user nobody designed for
Every governance program has users — and almost none were designed for them. The ML engineer who needs to classify a model's risk level at 4pm on a Friday. The data scientist selecting training data who needs guidance on what is permissible. The product manager deciding whether a feature needs an ethics review. These practitioners are not the governance team's stakeholders. They are governance's users. And nobody asked them what they need.
Apple's design approach involves not asking customers what they want but understanding what they need through observation. Similarly, governance designers should not ask practitioners "what governance do you want?" — they will say "none." Instead, observe where risk decisions are actually made: in code reviews, model selection meetings, data pipeline design, feature prioritization. Then design governance to be present at those exact moments, with exactly the information needed, in exactly the format that helps.
When users bypass your product
High governance bypass rates are the most diagnostic metric available. The IAPP AI Governance in Practice Report documents organizations with formal governance processes that have never actually changed a deployment decision. The checklist satisfies the process; the risk remains. When practitioners circumvent governance, the governance has a UX problem, not a compliance problem. Nobody bypasses a product they find useful.
JetBrains' March 2026 analysis is blunt: governance that merely satisfies compliance checklists while leaving operational reality unexamined does not constitute true operational governance. Organizations cannot truly govern AI if they cannot observe behavior, enforce controls at runtime, and understand economic impact. Their prescription: governance must be embedded by design — woven into how agents are built, orchestrated, deployed, and monitored from day one.
If people circumvent your governance, do not blame the people. Redesign the governance. Netflix does not force developers onto paved roads — the roads are so well-maintained that developers choose them voluntarily. That is the design standard.
Dieter Rams' 10 Principles of Good Governance
Dieter Rams, the legendary Braun designer who shaped a generation of industrial design — and directly influenced Jony Ive and Apple's aesthetic — developed his 10 principles of good design roughly 50 years ago. They are almost never applied to governance. They should be. What follows is a governance translation of each principle — the central contribution of this article.
Principle 1: Good Governance Is Innovative
Good governance anticipates new risks rather than regulating the last generation. Singapore's Model AI Governance Framework for Agentic AI, launched in January 2026, governs tomorrow's technology — autonomous agents, multi-agent coordination, delegation chains — rather than yesterday's chatbots. The framework was developed with input from over 70 global organizations including OpenAI, Google, Microsoft, and Anthropic. Innovation in governance means the same thing it means in product: solving problems people do not know they have yet.
Principle 2: Good Governance Is Useful
Good governance makes responsible AI decisions easier, not harder. The "it just works" standard: can a practitioner complete governance requirements in minutes, not days? McKinsey reports that AI systems developed with governance by design reduce compliance costs by 30%. Accenture found that automating compliance checks reduces governance implementation time by 40%. Governance that saves time is governance that gets used. Governance that costs time gets bypassed.
Principle 3: Good Governance Is Aesthetic
Coherent and elegant — practitioners can see its logic and appreciate its structure. Anthropic's Responsible Scaling Policy demonstrates this principle: clean structure, proportional safeguards that scale with capability levels, readable documentation that a non-specialist can follow. Compare this with the average enterprise governance document — 50 pages, written by committee, internally contradictory, formatted in 10-point Times New Roman. Aesthetic governance is not superficial. It signals that someone cared enough to organize complexity into clarity.
Principle 4: Good Governance Is Understandable
Self-explanatory without a 50-page manual. GOV.UK's design principle: "Do the hard work to make it simple." Simplicity in governance is the output of complex thinking, not the avoidance of it. As Jony Ive said: "True simplicity is derived from so much more than just the absence of clutter and ornamentation. It is about bringing order to complexity." The test: can any practitioner explain what is required and why in 60 seconds? If not, redesign.
Principle 5: Good Governance Is Unobtrusive
Embedded in workflows, not bolted on. This is the principle that separates designed governance from imposed governance. Netflix's 'paved road' concept provides an opinionated, supported path from idea to production — developers can go off-road, but the paved road is so well-maintained that most stay on it voluntarily. Spotify's 'golden path' provides similarly opinionated guidance. The governance translation: instead of requiring developers to stop and get approval, embed governance checks into the CI/CD pipeline, the model registry, the data catalog. Governance as the path of least resistance, not the roadblock in the way.
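To make the paved-road idea concrete, the sketch below shows a governance gate that could run as a CI step. Everything in it is an illustrative assumption: the model-card fields, risk tiers, and routing rules are invented for this example, not taken from any cited framework.

```python
# Sketch of a 'paved road' governance gate run as a CI step.
# The model-card fields, risk tiers, and routing rules below are
# hypothetical assumptions, not any specific framework's schema.

# Each risk tier maps to the action the pipeline takes automatically.
ACTIONS = {
    "low": "auto-approve",        # pipeline proceeds, decision is logged
    "medium": "peer-review",      # one reviewer sign-off required
    "high": "governance-review",  # routed to the review board, deploy blocked
}

def classify_risk(card: dict) -> str:
    """Score a model card against simple, transparent rules."""
    if card.get("uses_personal_data") or card.get("automated_decisions"):
        return "high"
    if card.get("customer_facing"):
        return "medium"
    return "low"

def governance_gate(card: dict) -> tuple[bool, str]:
    """Return (may_deploy_now, action); CI fails the step when review is needed."""
    action = ACTIONS[classify_risk(card)]
    return action == "auto-approve", action
```

The point of the design is that the low-risk path costs the developer nothing: the check runs, logs its decision, and gets out of the way. Only genuinely risky changes ever surface a human step.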
Design Thinking for Governance
Stanford d.school's five-stage process applied to governance design
Empathize: understand the governance user. Shadow ML engineers. Watch where risk decisions are actually made. Observe, do not ask — people will say they want no governance but need guidance at code review, model selection, and data pipeline moments.
Most governance programs skip four of these five stages and go directly to implementation.
Principle 6: Good Governance Is Honest
Honest governance does not pretend to cover risks it cannot. It does not perform compliance without substance. It acknowledges where it stops working. Anthropic's RSP openly acknowledges that some parts of their original theory of change have not played out as hoped — and publishes that admission. This connects directly to B13's structural analysis of governance limitations and B14's examination of governance theatre. Governance that claims completeness is dishonest by definition. Governance that says "here is what we cover and here is where we stop" earns trust.
Principle 7: Good Governance Is Long-Lasting
Principle-based, not rule-based. Rules become obsolete when the technology shifts; principles adapt. Estonia's approach builds governance as architecture — their Data and AI White Paper 2024-2030 sets human-centric, trustworthy principles operationalized through short-term action plans. The principles endure; the action plans iterate. Singapore's framework demonstrates the same pattern: principles aligned with OECD, EU, UK, and US assurance models, with implementation guidance that updates as technology evolves. A rule that says "all models must be reviewed quarterly" becomes irrelevant when models update continuously. A principle that says "risk must be assessed proportionally to impact" remains useful forever.
Principle 8: Good Governance Is Thorough Down to the Last Detail
Consistent from boardroom to deployment pipeline. No governance gaps between policy and practice. AWS's governance by design framework demonstrates a three-layered implementation: enterprise level (automated security and compliance policies through policy as code), business level (data policies supporting AI solutions within the value stream), and solution level (individual AI model risks and performance thresholds). The detail matters. A governance program that has board-level principles but no deployment-level controls is like an iPhone with a beautiful exterior and no battery.
Principle 9: Good Governance Is Resource-Conscious
Good governance does not waste practitioners' time, organizational attention, or leadership bandwidth. It respects the resources of the people and the organization it serves. 56% of practitioner time on manual governance is not just a productivity loss — it is governance consuming the resources it was designed to protect. Companies with established responsible AI programs report 42% improved business efficiency and 34% increased consumer trust. Resource-conscious governance has a positive ROI, as the A3 analysis demonstrates in detail.
Principle 10: Good Governance Is As Little Governance As Possible
Dieter Rams' personal motto: "Weniger, aber besser" — less, but better. Steve Jobs explained Apple's focus: "People think focus means saying yes to the thing you've got to focus on. But that's not what it means at all. It means saying no to the hundred other good ideas that there are." Every governance element must earn its place. If it does not reduce risk or enable better decisions, remove it. The most effective governance programs are ruthlessly curated, not comprehensive. Another policy, another committee, another review step, another form — each one that does not directly reduce risk is governance clutter.
These 10 principles are not abstract philosophy. They are a design audit for your governance program. Score your program against each one. Any principle you score below 7/10 is a design failure waiting to become a governance failure.
Two Governance Designs, Same Process
The same AI deployment governed two ways — friction vs. flow
Same governance outcome. Different design. The flow version embeds checks, automates routing, and surfaces guidance in context.
The Apple Design Process Applied to Governance
The Stanford d.school's five-stage design thinking process — Empathize, Define, Ideate, Prototype, Test — provides a rigorous methodology for designing governance programs that people actually use. Most governance programs skip four of these five stages. They go directly to implementation (which is not even on the list) without understanding users, framing problems, generating solutions, or testing before deployment.
Empathize: Understand the Governance User
Before designing any governance process, observe how practitioners actually make AI decisions. Where do they encounter risk? When do they need guidance? What information do they need, in what format, at what moment? Design thinking tools include intensive community consultations, surveys, interviews, and observation sessions. Applied to governance: shadow an ML engineer for a week. Watch where they make decisions that have risk implications. Note what information they lack. Design governance to be present at those exact moments — not in a portal they must remember to visit, but in the tools they already use.
Define: Frame the Real Problem
Most governance programs define the problem wrong. Wrong framing: "We need to comply with the EU AI Act." Right framing: "How might we help ML engineers make better risk decisions without slowing their work?" The first framing produces compliance processes. The second produces governance people actually use. Jobs's approach was understanding customers, not asking them — because if you ask people what they want, they will describe an incremental improvement to the current experience. Design starts from the user's need, not the organization's requirement.
Ideate: Generate Governance Solutions, Not More Process
Invite diverse stakeholders — not to each add their requirements (the committee anti-pattern that produces governance serving every function but no practitioner), but to co-create solutions that serve the user. The ideation constraint: every proposed governance element must reduce friction OR reduce risk. If it does neither, it does not belong. This is Jobs's "1,000 No's for Every Yes" applied to governance. Most governance programs fail by addition — another policy, another review step, another form. Design-thinking governance succeeds by subtraction.
Prototype: Test Governance Before Deploying It
Prototyping de-risks new policies before organizational resources are committed. Before rolling out a new impact assessment process to 500 engineers, test it with 5. Before publishing a 40-page AI policy, test whether practitioners can find what they need in under 60 seconds. Before creating a new review committee, prototype the review process with real cases. OvalEdge's analysis confirms: AI governance best practices emphasize starting at design, not deployment. Yet the overwhelming pattern is governance as a post-hoc compliance exercise — build first, govern later. This is equivalent to designing a building's electrical system after construction is complete.
Test: Measure Governance Like a Product
The test is not "did we implement the process?" but "did the process change behavior?" Measure adoption rates, time-to-comply, bypass rates, practitioner satisfaction. Organizations implementing effective governance frameworks achieve 30% faster project delivery and 40% improved stakeholder satisfaction. Comprehensive training and phased implementation show 70% higher adoption rates than abrupt transitions. If your governance has low adoption, iterate the design. If bypass rates are high, the governance has a UX problem. Treat every metric as a design signal, not a compliance score.
Governance UX Dashboard
Design metrics that measure effectiveness, not existence
- Adoption rate: teams voluntarily using governance tools. Target: >80% voluntary.
- Time to comply: average time to complete a governance requirement. Target: <15 minutes.
- Bypass rate: practitioners circumventing governance. Target: <5%.
- Developer satisfaction: would recommend the governance process. Target: NPS >0.
- Decision impact: governance changes deployment decisions. Target: >20%.
- Task completion: requirements completed without external assistance. Target: >90%.
Replace compliance metrics (% policies signed) with design metrics (% people who chose to use them).
Design thinking applied to governance is not a metaphor. It is a methodology. Empathize with your users. Define the real problem. Ideate solutions that reduce friction. Prototype before deploying. Test with real metrics. The organizations that treat governance as a design discipline will outperform those that treat it as a compliance exercise.
The Invisible Infrastructure Vision
The best infrastructure disappears
Infrastructure is by definition invisible — part of the background for other kinds of work. Plumbing, electricity, road systems. You do not think about them when they work; you only notice them when they fail. In minimalist architecture, the goal is to seamlessly integrate complex building systems so they disappear from conscious perception. AI itself is becoming invisible infrastructure — like electricity and the internet before it, shifting from tool to substrate. Governance should follow the same trajectory: from visible compliance activity to invisible infrastructure that enables responsible AI development.
What invisible governance looks like
Policy as code: Governance rules encoded in software, executed automatically in deployment pipelines. No human needs to remember to check a box; the system checks itself. Contextual guidance: Risk-relevant information surfaced at the moment of decision, inside the tools practitioners already use — not in a separate governance portal they must remember to visit. Automated risk scoring: Model risk assessments generated from metadata, training data characteristics, and deployment context — not from a manual questionnaire practitioners fill out from memory. Embedded approval workflows: Review and approval processes triggered automatically by risk thresholds, routed to the right reviewers with the right context.
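One of these patterns, automated risk scoring from metadata, reduces to a few lines once the model registry already holds the relevant fields. In the sketch below, the metadata fields, weights, and thresholds are hypothetical assumptions chosen for illustration, not a standard scoring scheme.

```python
# Sketch of automated risk scoring from model metadata. The field names,
# weights, and thresholds are illustrative assumptions, not a standard schema.
from dataclasses import dataclass

@dataclass
class ModelMetadata:
    handles_personal_data: bool
    autonomous_actions: bool    # acts without human review
    audience_size: int          # users affected per day
    training_data_reviewed: bool

def risk_score(meta: ModelMetadata) -> int:
    """Return a 0-100 score computed from metadata the registry already holds."""
    score = 0
    score += 35 if meta.handles_personal_data else 0
    score += 30 if meta.autonomous_actions else 0
    score += 20 if meta.audience_size > 10_000 else 5
    score += 0 if meta.training_data_reviewed else 15
    return min(score, 100)

def required_workflow(score: int) -> str:
    """Route to an approval workflow automatically, by threshold."""
    if score >= 70:
        return "governance-board-review"
    if score >= 40:
        return "team-lead-signoff"
    return "auto-approved-with-logging"
```

No practitioner fills out a questionnaire from memory; the score and the routing both fall out of data the system already has, which is what makes the governance invisible.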
AWS's governance by design framework demonstrates this at enterprise scale: organizations achieving strong results establish a governance-by-design mindset from the start, treating AI risk management as foundational rather than a compliance checkbox. By embedding governance into the development process itself, organizations scale AI initiatives more confidently and securely. One financial services implementation used automated policy-as-code at the enterprise level, data policies at the business level, and individual model risk thresholds at the solution level. Governance was present everywhere and visible nowhere.
The Invisible Infrastructure
The best governance disappears into the work it enables
Like plumbing and electricity, the best governance is infrastructure you only notice when it fails.
The behavioral design layer
Nudge theory — the insight from Thaler and Sunstein that the way choices are presented significantly influences decisions — transforms governance design. Over 200 behavioral insight units now operate globally, coordinating via the OECD Behavioral Insights Network. Over 80% of OECD governments use behavioral techniques in at least one major policy area. The governance application is direct: every default, every form field, every approval flow nudges practitioners toward or away from responsible behavior. The design of governance processes IS a choice architecture.
The most powerful design choice is the default. Organ donation research shows that countries with opt-out defaults have dramatically higher participation rates — Austria at 99.98% compared with Germany at 12% under opt-in. The same principle applies: opt-out governance (embedded, automatic, default-on) achieves dramatically higher adoption than opt-in governance (separate portal, manual process, requires initiative). Digital nudging works on the principle that every interface design is a kind of choice architecture with behavioral effects whether intended or not. Governance tooling should leverage this deliberately: surface risk warnings at decision points, pre-populate impact assessment fields with context-specific information, make the responsible choice the easy choice.
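The opt-out default can be made literal in deployment tooling. The sketch below assumes a hypothetical deployment config; the key names and audit behavior are invented for illustration.

```python
# Sketch of opt-out (default-on) governance as choice architecture. The
# config keys and audit notes are hypothetical, for illustration only.

def resolve_governance(deploy_config: dict) -> dict:
    """Governance checks are the default; opting out requires a logged reason."""
    if not deploy_config.get("governance_opt_out", False):
        return {"checks_enabled": True, "audit_note": "default-on"}
    reason = deploy_config.get("opt_out_justification")
    if not reason:
        # An undocumented opt-out falls back to the safe default.
        return {"checks_enabled": True,
                "audit_note": "opt-out rejected: no justification"}
    return {"checks_enabled": False, "audit_note": "opt-out: " + reason}
```

The asymmetry is the nudge: doing nothing yields the responsible configuration, while opting out costs a justification that lands in the audit trail.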
“Good design, when it's done well, becomes invisible.”
The vision is not governance that does not exist. It is governance so well-designed that it operates without friction, without interruption, without conscious effort. Like the best infrastructure, the best governance disappears into the work it enables.
Governance Design in Practice
Anthropic's RSP: Design Iteration at Scale
Anthropic's Responsible Scaling Policy is perhaps the clearest example of design thinking applied to governance. Five versions in 30 months: September 2023 (v1.0), October 2024 (v2.0), March 2025 (v2.1), May 2025 (v2.2, activating ASL-3 safeguards), and February 2026 (v3.0) introducing Frontier Safety Roadmaps and Risk Reports. This is not a static policy. It is a product that ships, learns, and refines — Apple's product methodology applied to governance. And Anthropic's reflections are candid about what worked and what did not. That honesty (Principle 6) is itself a design choice.
Estonia: Governance as Architecture
Estonia's Data and AI White Paper 2024-2030 embodies Principle 5 (unobtrusive) at national scale. The entire government runs on X-Road, a shared data exchange layer where governance is literally the architecture — not a separate compliance activity applied on top. Citizens interact with government services; governance operates invisibly in the infrastructure that enables those interactions. Estonia consistently ranks among the top digital governments globally not because they have more governance, but because their governance is better designed. It is embedded, automated, and invisible to the people it serves.
Singapore: Governance That Anticipates
Singapore's Model AI Governance Framework, developed with over 70 global organizations, demonstrates Principle 1 (innovative) through its Agentic AI extension launched in January 2026. While most governance frameworks are still catching up with generative AI, Singapore is already governing agentic systems. The framework is designed for interoperability — aligned with OECD, EU, UK, and US assurance models. This is governance designed like a platform: extensible, compatible, anticipatory.
GOV.UK: Do the Hard Work to Make It Simple
The UK Government's 10 design principles are a direct model for governance design. "Start with user needs" maps to designing governance for the practitioner, not the regulator. "Do less" maps to resisting the urge to add governance elements. "Design with data" maps to measuring governance effectiveness empirically. "Iterate, then iterate again" maps to launching governance as beta and improving based on feedback. The GOV.UK Design System itself demonstrates how design patterns — reusable, tested, accessible components — can standardize user experience across a complex ecosystem. Governance should follow the same pattern: reusable governance components, tested with practitioners, accessible to every role.
Four examples, four design principles in action: Anthropic iterates (Principle 7). Estonia embeds (Principle 5). Singapore anticipates (Principle 1). GOV.UK simplifies (Principle 4). None of them designed governance by committee. All of them designed governance by design.
The Design Metrics That Matter
From compliance metrics to design metrics
Traditional governance metrics measure existence: percentage of models reviewed, percentage of policies signed, number of governance meetings held. These are vanity metrics. They tell you governance exists. They do not tell you governance works. Design metrics measure effectiveness:
- Time to comply: How long does it take a practitioner to complete a governance requirement? Target: minutes, not days. If it takes a developer four hours to complete a risk assessment, the assessment is badly designed.
- Adoption rate: What percentage of teams voluntarily use governance tools versus being forced? Target: above 80% voluntary adoption. Below that, your governance has a UX problem.
- Bypass rate: How often do practitioners circumvent or shortcut governance? Target: below 5%. If higher, it is a usability failure, not a compliance failure.
- Practitioner satisfaction (Governance NPS): Do practitioners view governance as helpful or obstructive? Apply the Net Promoter Score: "Would you recommend this governance process to a colleague?" If the answer is no, redesign.
- Decision impact rate: How often does governance actually change a deployment decision? Target: above 20%. If lower, governance is rubber-stamping, not governing.
- Task completion rate: What percentage of practitioners can complete governance requirements without help? Target: above 90%. Below that, the process is not self-explanatory (Principle 4 failure).
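Most of these metrics can be computed mechanically once governance interactions are logged as events. The sketch below assumes a hypothetical event shape; the field names are illustrative, not a real telemetry schema.

```python
# Sketch: computing design metrics from governance event logs. The event
# fields below are a hypothetical telemetry shape, not a real schema.
from statistics import mean

def governance_metrics(events: list[dict]) -> dict:
    """Turn raw per-interaction governance events into design metrics."""
    completed = [e for e in events if not e["bypassed"]]
    promoters = sum(1 for e in completed if e["nps"] >= 9)
    detractors = sum(1 for e in completed if e["nps"] <= 6)
    return {
        "bypass_rate": sum(1 for e in events if e["bypassed"]) / len(events),
        "time_to_comply_min": mean(e["minutes"] for e in completed),
        "decision_impact_rate":
            sum(1 for e in completed if e["changed_decision"]) / len(completed),
        "task_completion_rate":
            sum(1 for e in completed if not e["needed_help"]) / len(completed),
        "governance_nps": 100 * (promoters - detractors) / len(completed),
    }
```

The hard part is not the arithmetic; it is instrumenting governance touchpoints so that bypasses, durations, and decision outcomes get recorded at all.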
Organizations without formal governance experience application sprawl, security violations, and compliance breaches at rates 3-4x higher than those with established governance. The governance works. The question is whether the governance is designed to work well. Nielsen Norman Group's State of UX 2026 found that User Experience will replace Model Intelligence as the primary sustainable differentiator. The same applies to governance: governance differentiation will move from "most comprehensive framework" to "best governance experience."
Measure governance like a product: adoption, satisfaction, completion, impact. If your only governance metrics are "percentage of policies published" and "number of reviews completed," you are measuring governance theatre, not governance effectiveness.
Paved Roads and Golden Paths: Governance as Enablement
The most powerful governance design insight of the past decade comes not from governance at all, but from platform engineering. Netflix uses the term 'paved road' for opinionated, supported paths that guide developers from idea to production. The paved road includes recommended tools, configurations, and practices — developers can go off-road, but the road is so well-maintained that most choose to stay on it voluntarily. Spotify's 'golden path' emerged when autonomous team culture led to "rumour-driven development" — too many choices, no clear guidance.
The governance translation is direct. Having a curated list of recommended tools enhances developer productivity and consistency, and makes it easier to enforce governance policies and ensure compliance. Instead of requiring developers to stop and get approval (a roadblock), embed governance checks into the tools they already use (a paved road). Instead of publishing a separate AI ethics policy portal (a detour), surface relevant guidance inside the IDE, the CI/CD pipeline, the model registry (a golden path). The question shifts from "did you follow the governance process?" to "why would you leave the paved road?"
Deloitte's State of AI 2026 confirms that organizations moving from experimentation to enterprise-scale deployment are finding that governance design is critical. PwC's 2026 predictions note that enterprises where senior leadership actively shapes governance achieve significantly greater business value. And KDnuggets' 2026 analysis identifies the dominant trend: "Governance by Design" approaches that embed responsibility directly into AI architectures rather than applying governance as external oversight.
Only one in five companies has a mature model for governance of autonomous AI agents. Agentic AI is the ultimate test of governance design. If governance requires human review at every decision point, it fundamentally cannot govern systems that make thousands of autonomous decisions per hour. Governance must be designed into the agent's architecture, not applied externally. The paved road for agentic AI is not a review process — it is the guardrails embedded in the agent's decision framework.
The paved road principle: make the governed path the easiest path. If compliance requires extra steps, people will skip them. If compliance is the default, people will follow it. Design the choice architecture so the responsible choice is the convenient choice.
What This Means for a 50-Person Company
If you are the CTO of a food delivery startup, you do not need Dieter Rams. You need governance that works. Here is the startup version of the 10 principles, distilled to four actions:
- Audit your governance UX. Time how long it takes an engineer to complete your risk assessment. If it takes more than 15 minutes, redesign it. Ask your team: "If our governance were an app, would you use it?" If not, fix the design before adding more governance.
- Embed, do not bolt on. Put governance checks in your CI/CD pipeline, not in a separate portal. Pre-populate forms with metadata. Automate what you can. The paved road for a startup is a Slack bot that asks three risk questions before deployment, not a 20-page impact assessment template borrowed from a bank.
- Measure bypass, not compliance. If your engineers are skipping governance steps, that is your most important signal. Do not enforce harder. Design better. Track how often governance actually changes a decision. If the answer is never, your governance is decoration.
- Start with five and subtract. Write five governance requirements. Live with them for 30 days. Then ask: which of these actually reduced risk or improved a decision? Remove the ones that did not. Repeat quarterly. The MVG framework provides the 90-day implementation path designed for resource-constrained teams.
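The Slack-bot-style pre-deploy check mentioned above reduces to a few lines of routing logic. The sketch below omits the messaging layer entirely; the three questions and the routes are illustrative examples, not a prescribed checklist.

```python
# A minimal sketch of the 'three risk questions before deployment' idea.
# The questions and routing are illustrative, not a prescribed checklist.

QUESTIONS = {
    "personal_data": "Does this change touch personal or payment data?",
    "no_human_in_loop": "Does it make decisions without a human in the loop?",
    "new_model": "Does it ship a new or retrained model?",
}

def pre_deploy_check(answers: dict) -> dict:
    """Three yes/no answers in, a routing decision out."""
    flags = [key for key in QUESTIONS if answers.get(key, False)]
    if not flags:
        return {"proceed": True, "route": "deploy", "flags": []}
    # Any 'yes' routes to a short review rather than blocking outright.
    return {"proceed": False, "route": "15-minute risk review", "flags": flags}
```

Three questions, answered honestly at the moment of deployment, will catch more real risk at a 50-person company than an impact-assessment template nobody completes.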
For the board member: ask your governance team one question at the next meeting: "What is our governance bypass rate?" If they cannot answer, they are measuring compliance, not effectiveness. If they can answer and the rate is above 10%, your governance has a design problem. Either response is valuable. The question itself is the practice. The A3 ROI analysis provides the business case your CFO needs. The A12 failure patterns show what happens when governance is ignored. But start with the design question: is your governance designed for the people who use it?
Your food delivery app's governance does not need to be comprehensive. It needs to be well-designed. A three-question risk check that engineers actually complete is more effective than a 40-page impact assessment that nobody reads. Less, but better.
Less, But Better
This article has made a single argument: most AI governance fails not because it is wrong, but because it is badly designed. The 64-point gap between AI adoption (78%) and governance adoption (14%) is a design gap. The 56% of practitioner time consumed by manual governance processes is a design failure. The organizations that close these gaps will not be the ones with the most governance. They will be the ones with the best-designed governance.
The synthesis of Rams, Jobs, and Ive applied to governance produces three standards:
- As innovative as the technology it governs. Governance that anticipates agentic AI, multimodal systems, and autonomous agents — not governance still catching up with chatbots.
- As useful as the tools practitioners already love. Governance that saves time, surfaces insight, and makes risk decisions easier — not governance that adds friction, creates paperwork, and slows deployment.
- As invisible as the infrastructure we take for granted. Governance embedded in workflows, executed automatically, surfaced contextually — not governance that lives in a portal nobody visits.
This connects to the governance ecosystem: B13 establishes that well-designed governance honestly acknowledges its limits. B14 shows that well-designed governance cannot be theatre — because design is measured by what it does, not how it looks. B15 demonstrates that well-designed governance makes the hardest decisions, including when to stop. A11 Data Governance applies the same design principles to the foundation layer. A14 Epistemic Humility provides the philosophical foundation for honest governance design. Together, they form the most comprehensive publicly available analysis of AI governance as a discipline — and this article, which argues that governance is a design discipline, is what connects them.
The governance profession is growing rapidly, but the focus remains on policy and compliance rather than experience design. Organizations hire governance professionals with legal and policy backgrounds, not design backgrounds. Governance programs are designed by compliance experts, not experience designers. That is the equivalent of having engineers design the iPhone interface without Jony Ive. The gap is not in governance expertise. It is in governance design expertise. The organizations that hire for both will build governance people actually use.
“Simplicity is not the absence of clutter — that's a consequence of simplicity. Simplicity is somehow essentially describing the purpose and place of an object and product.” (Jony Ive)
The final line belongs to design. Jony Ive: "Good design, when it's done well, becomes invisible." Good governance, when it's done well, becomes inevitable — practitioners choose it because it is the best-designed path available. The standard is not compliance. The standard is craft. The standard is Apple. And governance deserves nothing less.
Download: AI Governance Design Audit Worksheet
Score your governance program against all 10 design principles, benchmark your UX metrics (adoption rate, bypass rate, time-to-comply, practitioner NPS), and build a 90-day redesign sprint plan. Includes the Friction vs. Flow diagnostic and paved road implementation checklist.