Key Takeaways
- 75% have AI policies but only 18% have councils with decision authority
- Ethics boards without enforcement power are governance theatre by definition
- Performative governance creates legal exposure, not legal protection
- Five specific shifts convert appearance into substance
- Google dissolved its AI ethics board in nine days — the archetype of theatre
The $8.6 billion ritual that makes you feel safe without making you safer
Security Theatre, Meet AI Governance Theatre
The Transportation Security Administration spends $8.6 billion per year on airport security. In covert "red team" tests by the DHS Inspector General, TSA screeners failed to detect weapons and mock explosives in approximately 95% of attempts. Passengers wait in long lines, remove their shoes, surrender their water bottles, and walk through body scanners. Everyone feels safer. Almost nobody is safer. Bruce Schneier coined a term for this in 2003: security theatre — visible rituals that create the feeling of improved security while doing little or nothing to achieve it.
Your AI governance program might be doing the same thing. And unlike airport security, the stakes are compounding. AI-related incidents rose to 233 in 2024 — a 56.4% increase over 2023. 72% of organizations have integrated and scaled AI, but only 33% have proper responsible AI controls. That is a 39-percentage-point gap between deployment and governance. Organizations are flying AI at scale with screening that catches almost nothing.
B13 established that governance frameworks have five structural limitations no framework can overcome — pacing, opacity, boundaries, measurement, and emergence. Those limitations are architectural. They apply even to organizations that are trying to govern honestly. This article asks a harder question: what happens when organizations are not even trying? When the governance program exists to make leadership feel responsible, not to actually reduce risk? When compliance is performance, not protection?
The answer is governance theatre. And it is far more common than the governance profession wants to admit.
This article provides a taxonomy of five types of AI governance theatre, the evidence that each type is widespread, a diagnostic test for each, the quantified cost of performative governance, and a path from theatre to substance. If your governance program has never stopped a deployment, this article will explain why — and what to do about it.
The Governance Spectrum: where does your governance fall between theatre and substance?
What Bruce Schneier Taught Us About Security Theatre
The original concept: visible rituals that create the feeling of safety without the reality
In his 2003 book Beyond Fear, Bruce Schneier defined security theatre as "the practice of implementing security measures that are considered to provide the feeling of improved security while doing little or nothing to achieve it." The measures provide no measurable security benefits, or minimal benefits that do not outweigh their cost. The TSA is the canonical example, but Schneier identified the pattern across domains: security cameras that are not monitored, identity checks that are not verified, policies that are not enforced.
Why theatre persists: it is not irrational
Schneier himself noted something critical: "Security theater scares off stupid attackers and those who just don't want to take the risk" — acknowledging that theatre has some value. It reassures the public. It signals that authorities are taking threats seriously. It deters unsophisticated adversaries. The problem is not that theatre exists. The problem is when theatre replaces substance — when organizations confuse the feeling of security with actual security. When the ritual becomes the goal, the threat goes unaddressed.
Theatre persists because it serves real organizational needs. Leadership needs to tell the board that governance is in place. Regulators need to see compliance artifacts. Customers need to hear that AI is "responsible." PR needs talking points. None of these needs are illegitimate. But none of them require governance that actually reduces risk. They can all be satisfied by governance that looks like it reduces risk. And that is the structural incentive that makes theatre the path of least resistance.
The AI governance parallel: from TSA to ethics boards
The structural parallel between security theatre and AI governance theatre is precise. The TSA screening line maps to the AI ethics review process. Visible but ineffective pat-downs map to checkbox impact assessments. A 95% failure rate in detecting real threats maps to governance programs that have never stopped a deployment. Enormous budgets with minimal risk reduction map to large governance teams with minimal behavior change. And the feeling of improved security maps directly to the feeling of responsible AI. The pattern is identical. The domain has changed. The dysfunction has not.
“Security theater scares off stupid attackers and those who just don't want to take the risk.”
The Uncomfortable Statistics
The adoption-governance chasm
The numbers are damning. EY's 2025 AI Governance Survey of 975 C-suite leaders across 21 countries found that 72% of executives say their organizations have integrated and scaled AI in most or all initiatives. Only 33% have protocols that address all facets of responsible AI. That is a 39-percentage-point gap — and it is not a maturity curve. It is structural evidence that governance is an afterthought for most organizations. 76% are using or planning to use agentic AI within a year, but only 56% are familiar with the associated risks. 88% are using synthetic data generation, but only 55% are aware of the risks.
The policy-practice gap
The Pacific AI 2025 Survey reveals the funnel from appearance to substance: 75% of organizations have established AI usage policies. Only 36% have adopted a formal governance framework. Only 18% have enterprise-wide councils authorized to make decisions on responsible AI governance. That is a 57-percentage-point drop from "we have a policy" to "we have decision-making authority." Three-quarters of organizations have the performance of governance — a policy document on the intranet. Fewer than one in five have the substance — a body with authority to slow, change, or stop a deployment.
PwC's 2025 Responsible AI Survey adds another layer: nearly half of respondents said turning RAI principles into operational processes has been a challenge. 56% of executives say first-line teams — IT, engineering, data science — now lead RAI efforts. This means governance is being delegated to the teams it is supposed to oversee. It is as if the TSA asked passengers to screen themselves.
The incident trajectory
Stanford HAI's 2025 AI Index Report documents the divergence between governance investment and actual outcomes: 233 reported AI-related incidents in 2024 — a record high, representing a 56.4% increase over 2023. The number of responsible AI papers at leading conferences increased 28.8%. More papers, more principles, more programs — and more incidents. The research output is growing. The real-world harm is growing faster. This is what theatre looks like at scale.
Only 14% of CEOs believe their AI systems operate in adherence to regulations — compared to 29% of their C-suite peers. Only 28% of organizations say the CEO takes direct responsibility for AI governance oversight. Only 17% report that their board does. Only 27% have formally added AI governance to committee charters. When 83% of boards do not own AI governance and 86% of CEOs do not believe their systems comply, governance exists as organizational decoration — present in the org chart, absent from the boardroom.
A 39-percentage-point gap between AI deployment and governance controls. A 57-percentage-point drop from having a policy to having decision authority. 233 incidents rising 56% year over year despite record RAI investment. These are not growing pains. These are the statistics of an industry performing governance rather than practicing it.
A Taxonomy of AI Governance Theatre
What follows is a five-type taxonomy of governance theatre. Each type describes a specific pattern of performative governance, provides evidence that the pattern is widespread, and offers a diagnostic test you can apply to your own organization. This taxonomy is the core contribution of this article — and the framework that distinguishes genuine governance from its imitation.
The Five Types of Governance Theatre
Type 1: Ethics Board Theatre — Advisory Boards with No Authority
Google's Advanced Technology External Advisory Council (ATEAC) was announced on March 26, 2019 and dissolved on April 4, 2019 — nine days later. It had eight unpaid members, scheduled to meet four times in 2019, and it faced immediate controversy over its membership, including the Heritage Foundation president, whose anti-LGBTQ positions drew employee protests. Approximately 2,500 Google employees signed a petition demanding changes. One member resigned almost immediately, saying he "didn't believe it was the right forum." A critical assessment noted: "A role on Google's AI board was an unpaid, toothless position that cannot possibly, in four meetings over the course of a year, arrive at a clear understanding of everything Google is doing, let alone offer nuanced guidance on it."
ATEAC was not even the worst part. The worst part is what Google did to its internal ethics team. Timnit Gebru, co-lead of Google's Ethical AI team, was forced out in December 2020 after co-authoring a paper on the risks of large language models — the technology central to Google's business. 1,400+ Google employees and 1,900+ external supporters signed a letter of protest. Three months later, Margaret Mitchell, the other co-lead and founder of Google's Ethical AI team, was fired in February 2021. Google dismantled its external ethics board in nine days, then fired the two co-leads of its internal ethics team for doing their jobs. The AI Principles remained on the website. The principles never changed. The people who might have enforced them were gone.
This is not a Google-specific problem. Research by Jonas Schuett at the Centre for the Governance of AI found that organizations are "hesitant to give a right of veto to an advisory board largely composed of external figures." Most ethics boards operate in a purely advisory capacity — their influence "relies on good relationships with management (if collaborative) or the board of directors (if adversarial)." The ethics board "usually cannot force the board of directors to do something." HBR's analysis confirms: the committee "may or may not have the power to veto product proposals depending on how much direct business influence the committee has."
The diagnostic test: Has your ethics board ever stopped or materially changed a deployment? If the answer is no after 12+ months of operation, you have theatre. An advisory board that has never changed an outcome is a decoration, not governance.
Type 2: Policy Theatre — Beautiful Documents Nobody Reads or Enforces
IEEE researchers identified a pattern they call "Press Release Ethics" — public ethical commitments that organizations subsequently abandon when expedient for business. Companies publish policies that are "specific enough to encourage feelings of safety and comfort, but in many cases still vague and amorphous enough to allow behavior that contradicts the initial reassurance." Their expanded research, which won an IEEE Best Paper award in 2025, examined AI industry consortia and found "patterns of strategic non-compliance, exploitation of lack of enforcement measures" — concluding that "industry self-regulation through membership in these collective bodies represents reputation management rather than adherence to genuine ethical standards."
The 75%/36%/18% funnel tells the story structurally. Three-quarters of organizations have the visible artifact — a policy document. Barely a third have built the invisible infrastructure — a governance framework with processes, roles, and escalation paths. Fewer than one in five have the authority structure — a council that can actually make binding decisions. Oliver Patel, former Head of Enterprise AI at UCL, puts it bluntly: "There is plenty of compliance theatre and performative governance. AI policy and compliance work often does not adequately address how requirements can actually be implemented and operationalized in practice by technical teams."
Academic research confirms the structural gap: "A significant gap exists between the theory of AI ethics principles and the practical design of AI systems." The dominant approach to Responsible AI "tends to frame ethics as a checklist of static principles or a set of modular compliance tools." Checklists satisfy auditors. They do not stop harm. The policy exists as an organizational shield — proof that someone thought about ethics — rather than as an operational constraint that changes how technology is built and deployed.
The diagnostic test: Can any employee find your AI policy in under 60 seconds? Has the policy ever changed a decision? If your team cannot locate it or cannot cite a single instance where it altered an outcome, you have theatre.
Type 3: Audit Theatre — Checkbox Assessments That Miss Real Risks
The EU AI Act's Fundamental Rights Impact Assessment (FRIA) is one of the most visible governance mechanisms in global AI regulation. It is also structurally toothless. Critical analysis reveals: "The FRIA does not have the power to prevent the deployer from using a high-risk AI system, regardless of the risks identified." The risk classification system itself "cannot be distinguished from a political evaluation since the risks to be considered are risks to political values" — the resemblance to a formal risk analysis is "superficial." The EU's flagship AI regulation contains an impact assessment that cannot stop anything. It is a documentation requirement, not a governance mechanism.
Australia's Robodebt scheme demonstrates what happens when audit theatre meets real people. The automated debt recovery system used income averaging to generate $2 billion in debt notices to 700,000 welfare recipients — $1.73 billion in unlawfully raised debts against 433,000 people. Deaths by suicide were linked to the scheme. The Royal Commission documented that "robust public governance processes — effective 'gatekeepers' in central government departments, as well as properly resourced and independent 'referees', such as ombudsman, privacy commissioners and auditors — play an important part in stopping poorly thought-through or unlawful programmes from proceeding." All of these gatekeepers existed. None stopped Robodebt. The governance was real on paper and theatre in practice. The result was $1.73 billion in unlawful debts and a A$1.87 billion settlement — the largest class action in Australian history.
The US federal government is not immune. A GAO 2025 assessment found that "none of the sector risk assessments fully addressed the six activities that establish a foundation for effective risk assessment" and "none fully evaluated the level of risk by including measurements that reflected both the magnitude of harm and the probability of an event occurring." The institutions that set the standard for risk assessment are themselves performing audit theatre. And within organizations, teams spend 56% of their time on governance-related activities when using manual processes — more than half of AI talent focused on compliance paperwork instead of value creation or genuine risk reduction.
The diagnostic test: When was the last time an audit finding killed or materially changed a project? If your impact assessments have a 100% pass rate, you do not have a governance process — you have a rubber stamp.
Type 4: Transparency Theatre — Disclosures That Obscure More Than They Reveal
A systematic analysis of 32,000+ AI model cards found that safety-critical categories had massive gaps: interpretability was detailed in fewer than 20% of model cards and absent in over 80%. Safety evaluation, bias and fairness, and limitations were "frequently mentioned but rarely described in depth." There were 97 different section names for usage information — no standardization whatsoever. Unlike traditional scientific papers which undergo peer review, model cards have no mechanism for ensuring balanced and comprehensive documentation. The researchers identified the "charade" risk directly: "The 'Model Card' could either be a system that serves safety and the ethical use of AI, or it could turn into a charade, with companies in a race to the bottom of true transparency and top of claimed performance."
New York City's MyCity chatbot demonstrates transparency theatre by a government. NYC launched an AI chatbot on its official website to provide legal and regulatory guidance to businesses. It came with a disclaimer warning it might produce "incorrect, harmful or biased" information. Then it told landlords they could refuse Section 8 vouchers — illegal in NYC. Told businesses they could go cashless — illegal since 2020. Told employers they could take workers' tips. Said there were no regulations on scheduling changes. All wrong. All illegal. The chatbot cost approximately $600K to build and $500K in ongoing costs before being shut down as "functionally unusable." A disclaimer on an official government website is not transparency. When a system carries government authority, users treat its outputs as de facto official guidance. The disclaimer was theatre. The illegal advice was real.
At the corporate level, "AI washing" — analogous to greenwashing — centers on "technical and symbolic legitimacy, where firms aim to secure digital legitimacy — the perception that they possess advanced technological capability, data competence, and innovation leadership." Companies use "selective disclosure, emotionally resonant narratives, and visually appealing content to construct a public image that aligns with societal expectations." Stanford HAI tracked foundation model transparency scores: the average increased from 37% in October 2023 to 58% in May 2024 — progress, but major model developers still fail to disclose 42% of transparency criteria. Improvement is happening. The baseline was dismal.
The diagnostic test: Could a downstream user of your model card, transparency report, or AI disclosure actually assess the risks relevant to their use case? If the answer requires specialized knowledge the typical reader does not have, transparency has become theatre.
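One way to operationalize this diagnostic is to audit your documentation against the safety-critical categories the 32,000-card study found under-documented. The sketch below is a hypothetical checker, not a standard instrument: the section keys and the depth heuristic are assumptions introduced here for illustration.

```python
# Hypothetical audit of a model card against the safety-critical categories the
# study found under-documented. Section keys and the depth heuristic (character
# count as a rough proxy for "described in depth") are assumptions.

SAFETY_CRITICAL_SECTIONS = [
    "interpretability",
    "safety_evaluation",
    "bias_and_fairness",
    "limitations",
]
MIN_DEPTH_CHARS = 300  # rough cutoff separating "mentioned" from "described in depth"

def audit_model_card(card: dict) -> dict:
    """Classify each safety-critical section as absent, mentioned, or detailed."""
    report = {}
    for section in SAFETY_CRITICAL_SECTIONS:
        text = card.get(section, "").strip()
        if not text:
            report[section] = "absent"
        elif len(text) < MIN_DEPTH_CHARS:
            report[section] = "mentioned, not described in depth"
        else:
            report[section] = "detailed"
    return report

# Example: a card that mentions limitations and fairness but documents neither in depth.
card = {
    "intended_use": "Customer support triage in English.",
    "limitations": "May be unreliable outside English.",
    "bias_and_fairness": "Evaluated on an internal benchmark only.",
}
for section, status in audit_model_card(card).items():
    print(f"{section}: {status}")
```

A report full of "absent" and "mentioned" entries is the documentation-level signature of transparency theatre: the artifact exists, but a downstream user cannot make a risk decision from it.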
Type 5: Consultation Theatre — Engagement That Does Not Influence Outcomes
The IEEE Best Paper research on AI industry associations found "patterns of strategic non-compliance" and "exploitation of lack of enforcement measures." Industry self-regulation through collective bodies "represents reputation management rather than adherence to genuine ethical standards." Companies join consortia, sign principles, attend meetings — then act independently of those commitments when business interests conflict. The research documents a "theatre effect" from CSR literature: "As more and more firms voluntarily disclose CSR information, a theatre effect is created, where firms that do not disclose CSR information are vulnerable to negative social evaluation." This creates a disclosure arms race where the act of disclosing becomes more important than the content disclosed.
Internal consultation theatre follows the same pattern. The "ethics review" happens after the deployment decision has already been made. Review processes can recommend changes but cannot require them. "Stakeholder engagement" sessions collect feedback, but outcomes are predetermined. The commercial momentum of a product launch makes intervention progressively harder — by the time the ethics review occurs, the engineering is complete, the marketing is scheduled, and the leadership has committed. The review becomes a ritual performed for the record, not a genuine decision point. This mirrors B13's structural limitations: governance assumes it can intervene at the right time, but in practice, the window for meaningful intervention has already closed.
The diagnostic test: Can you point to a specific instance where consultation feedback — internal or external — materially changed a deployment decision? Not cosmetically (we changed the language in the model card) but substantively (we delayed launch, changed the model, restricted the use case, or killed the project)? If you cannot, you have theatre.
The Real Cost of Theatre
Governance theatre is not free. It has quantifiable regulatory costs, measurable trust costs, documented talent costs, and a hidden cost that may be the most dangerous of all. The Liability Ledger documents how AI governance gaps compound over time. Theatre accelerates that compounding.
The Cost Ledger
Governance theatre is not free — the costs are quantifiable and compounding
- FTC Operation AI Comply: 5+ enforcement actions since September 2024
- SEC AI washing fines: first-ever actions (March 2024), a signal of escalation
- EU AI Act maximum penalty: €35M or 7% of global revenue, whichever is higher
- Google hypocrisy penalty: AI Principles (2018) → fired ethics leads (2020-21)
- NYC MyCity chatbot: gave illegal advice on an official government website
- Clearview AI settlement: 30B+ photos scraped, multi-country fines
- Robodebt unlawful debts: 433,000 people; deaths by suicide linked to the scheme
- False confidence gap: 59% confident in AI visibility, but only 36% have a policy
- Talent drain: the best ethics talent leaves performative programs
Theatre is not free: the regulatory cost is the visible cost, but the trust erosion and false confidence may be orders of magnitude larger.
Regulatory cost: the enforcement era has begun
The FTC launched "Operation AI Comply" in September 2024 with five simultaneous enforcement actions targeting AI washing. DoNotPay paid $193,000 for claiming to be the "world's first robot lawyer." Click Profit faced $20M+ in judgments for baseless claims about AI-driven passive income. Enforcement has continued into 2026, with actions against IntelliVision Technologies, Air AI, and Growth Cave — signaling that even as the administration shifts, agencies continue scrutiny of companies making unsupported claims about AI.
The SEC issued its first-ever AI washing enforcement actions in March 2024 against two investment advisers (Delphia and Global Predictions), levying $400,000 in civil penalties for making "false or misleading statements about the use of AI in their operations." These are the first, not the last. The EU AI Act's penalty framework escalates further: potential fines up to 35 million euros or 7% of global annual revenue, whichever is higher. Full high-risk system obligations apply August 2026. Organizations relying on governance theatre may find their performative compliance fails to meet the Act's substantive requirements.
Trust cost: the hypocrisy penalty
When governance is exposed as performative, the reputational damage exceeds that of having no governance program at all. Google's AI Principles — published in 2018, prominently displayed on the website — followed by the Gebru and Mitchell firings created a narrative of institutional hypocrisy that persists years later. NYC's MyCity chatbot was worse because it carried government authority — the gap between official branding and actual harm amplified the trust violation. Research confirms: "AI's practical applications must align with its promises to prevent trust erosion, as gaps between expectations and real-world performance fuel consumer skepticism." Ethics washing "occurs when companies overstate their capabilities in responsible AI, creating an uneven playing field where genuine efforts are discouraged or overshadowed by exaggerated claims." The hypocrisy penalty is worse than having said nothing at all.
Talent cost: the best people leave theatre
AI ethics professionals increasingly leave organizations where their work is performative. The Gebru and Mitchell firings sent a chilling signal across the entire AI ethics field: do real ethics work and risk your career. The best governance talent gravitates toward organizations with substantive programs — Anthropic, Microsoft's Office of Responsible AI — rather than advisory-only theatre. When your governance program is performative, you cannot attract or retain the people who would make it substantive. The theatre becomes self-reinforcing.
The hidden cost: false confidence
This connects directly to B13's thesis and is arguably the most dangerous cost. Theatre creates a more dangerous situation than no governance at all. Organizations that believe they are governing responsibly are less likely to invest in real risk reduction. Leadership confidence in governance programs that have never stopped a deployment creates blind spots. 59% of organizations report being 'very confident' in their visibility into AI tools, but only 36% actually have an AI policy in place. The gap between perceived and actual governance maturity is itself a risk factor. The organization that knows it has no governance is at least honest about its exposure. The organization performing governance believes it is protected when it is not. That false confidence is the most expensive form of theatre.
FTC: $20M+ in judgments. SEC: $400K in first-ever AI washing fines. EU AI Act: up to 7% of global revenue. But the regulatory cost is the visible cost. The trust erosion, the talent drain, and the false confidence may be orders of magnitude more expensive. Theatre is not free. It never was.
From Theatre to Substance: What Genuine Governance Looks Like
The substance test
The diagnostic is simple and uncomfortable: governance is real if and only if it has the authority and willingness to slow, change, or stop a deployment. If your governance program has never exercised that authority — if it has reviewed 50 systems and approved all 50, if every impact assessment passes, if every ethics review results in "proceed with minor recommendations" — then you do not have governance. You have a process that generates documentation. The substance test separates governance from theatre with a single question: has this ever stopped something?
The Substance Test
Six diagnostic questions to determine if your governance is theatre or substance
- Has your governance program ever stopped or materially changed a deployment?
- Does your ethics/governance body have the authority to veto a product decision?
- Can any employee find your AI policy in under 60 seconds?
- When was the last time an audit finding killed or changed a project?
- Could a downstream user of your transparency docs actually assess risks?
- Can you cite a specific instance where consultation changed an outcome?
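As a minimal illustration of how the test might be applied, the six questions can be scored as a simple checklist. The yes/no phrasing of question four, the scoring bands, and the verdict labels below are assumptions made for this sketch, not a validated instrument.

```python
# Sketch of the six-question substance test as a scored checklist.
# Questions are adapted from the article's substance test; scoring bands are assumptions.

SUBSTANCE_QUESTIONS = [
    "Has your governance program ever stopped or materially changed a deployment?",
    "Does your ethics/governance body have the authority to veto a product decision?",
    "Can any employee find your AI policy in under 60 seconds?",
    "Has an audit finding killed or changed a project in the last 12 months?",
    "Could a downstream user of your transparency docs actually assess risks?",
    "Can you cite a specific instance where consultation changed an outcome?",
]

def substance_verdict(answers: list) -> str:
    """Turn six yes/no answers (in question order) into a rough verdict."""
    if len(answers) != len(SUBSTANCE_QUESTIONS):
        raise ValueError("expected one answer per question")
    yes = sum(answers)
    if yes <= 2:
        return f"{yes}/6: predominantly theatre"
    if yes <= 4:
        return f"{yes}/6: mixed, substance in places and theatre in others"
    return f"{yes}/6: predominantly substance"

# Example: a findable policy and readable transparency docs,
# but no authority and no record of ever changing an outcome.
print(substance_verdict([False, False, True, False, True, False]))
```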
Anthropic's RSP as a case study in substance
Anthropic's Responsible Scaling Policy v3.0 is the most cited example of substantive corporate AI governance, and understanding why illuminates the difference between theatre and substance. The RSP defines specific AI Safety Level Standards (ASL Standards) — graduated sets of safety and security measures that become more stringent as model capabilities increase. This is not advisory. It is structural. Routine model evaluations based on capability thresholds determine whether current safeguards remain appropriate. If models cross capability thresholds, new safeguards are required before deployment — not recommended, required.
The RSP includes self-binding commitments: once models cross certain capability thresholds, Anthropic must develop "an affirmative case identifying the most immediate and relevant misalignment risks from models pursuing misaligned goals and explaining how they have mitigated them." The assessment methodology includes external expert feedback, and the policy's version history demonstrates genuine iteration — v1.0 (September 2023), v2.0 (October 2024), v3.0 (February 2026) — evolving based on what works and what does not. The principle: proportional protection — safeguards that scale with potential risks. The governance scales WITH the capability, rather than being a static document that capability outgrows.
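To make the structural point concrete, here is a minimal sketch of threshold-gated deployment under illustrative assumptions: the capability scores, threshold values, and safeguard lists are invented for the example and are not Anthropic's actual ASL definitions or evaluation methodology.

```python
# Illustrative sketch of capability-threshold-gated deployment. The thresholds,
# scores, and safeguard names are assumptions, not Anthropic's ASL definitions.
from dataclasses import dataclass

@dataclass
class SafetyLevel:
    name: str
    capability_threshold: float   # evaluation score at which this level applies
    required_safeguards: set      # safeguards that must exist before deployment

ASL_LADDER = [
    SafetyLevel("ASL-2", 0.0, {"acceptable-use policy", "baseline security"}),
    SafetyLevel("ASL-3", 0.6, {"enhanced security", "deployment red-teaming", "misuse monitoring"}),
    SafetyLevel("ASL-4", 0.85, {"affirmative misalignment case", "third-party evaluation"}),
]

def required_level(capability_score: float) -> SafetyLevel:
    """Return the highest safety level whose threshold the model has crossed."""
    return [lvl for lvl in ASL_LADDER if capability_score >= lvl.capability_threshold][-1]

def may_deploy(capability_score: float, safeguards_in_place: set) -> bool:
    """Deployment is blocked, not merely discouraged, until safeguards match the level."""
    level = required_level(capability_score)
    missing = level.required_safeguards - safeguards_in_place
    if missing:
        print(f"Blocked at {level.name}: missing {sorted(missing)}")
        return False
    return True

# A model that crosses the 0.6 threshold cannot ship on lower-level safeguards alone.
may_deploy(0.7, {"acceptable-use policy", "baseline security"})
```

The design choice the sketch captures is that the gate is a function of capability, not of intent: as the evaluation score rises, the deployment check automatically demands more, which is what makes the governance scale with the system it governs.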
Microsoft's evolution from principles to operations
Microsoft's Office of Responsible AI (ORA) was established in 2019 with five functions: setting internal policies, defining governance structures, providing adoption resources, reviewing sensitive use cases, and shaping public policy. The Responsible AI Council brings together research, policy, and engineering teams with senior business partners "who are accountable for implementation." The Sensitive Uses program provides pre-deployment review — "reviews often culminating in requirements that go beyond the Responsible AI Standard." Microsoft learned that "a single team or discipline tasked with responsible AI was not going to meet their objectives." They publish annual Responsible AI Transparency Reports, including the 2025 report. This is governance that evolved from principles on a website to operational enforcement across the organization.
Five shifts from theatre to substance
The path from theatre to substance requires five structural shifts. Each shift addresses one type of governance theatre directly. These are not incremental improvements. They are architectural changes to how governance operates.
Five Structural Shifts
From theatre to substance — the architectural changes governance requires
- Ethics board advises and management ignores → a governance body that can slow, change, or stop.
- Policy exists in SharePoint and nobody reads it → requirements engineered into production systems.
- A one-time impact assessment at launch → continuous monitoring with intervention triggers.
- A model card that satisfies the compliance team → transparency that enables downstream risk decisions.
- Feedback is collected but outcomes are predetermined → consultation that materially changes deployment.
- Shift 1: From advisory to authoritative. Give governance veto power. An ethics board that can advise but never veto is theatre. An ethics board that can slow, change, or stop a deployment is substance. This does not mean governance must stop everything — it means governance must have the authority to stop something, and must exercise that authority when warranted. Anthropic's ASL Standards demonstrate this: capability thresholds trigger mandatory safeguards, not optional recommendations.
- Shift 2: From documentation to enforcement. Policies must change behavior, not just exist as artifacts. Oliver Patel's prescription: "ensuring that key requirements from your policies and standards are engineered into your production AI models, systems, and platforms." A policy that lives in a SharePoint folder is documentation. A policy encoded in deployment pipelines, model registries, and CI/CD gates is enforcement; a minimal sketch of such a gate appears after this list.
- Shift 3: From checkbox to continuous. One-time impact assessments at launch are theatre. Continuous monitoring with defined intervention triggers is substance. AI systems drift. Contexts change. User populations shift. A governance assessment done once is a snapshot of a moment that no longer exists. The A11 data governance foundation addresses this: governance must be continuous because the systems it governs are continuous.
- Shift 4: From disclosure to usability. Model cards that satisfy compliance requirements are theatre. Transparency artifacts that enable downstream risk assessment are substance. If a typical deployer cannot use your transparency documentation to make an informed decision about their use case, your transparency serves your compliance team, not your users.
- Shift 5: From input to influence. Consultation processes that collect feedback are theatre. Engagement processes where feedback materially changes outcomes are substance. The test is not whether you asked. The test is whether the answer changed anything.
These five shifts map directly to the five types of theatre: advisory boards become authoritative (Ethics Board Theatre → Shift 1), policy documents become enforcement mechanisms (Policy Theatre → Shift 2), checkbox audits become continuous monitoring (Audit Theatre → Shift 3), disclosure becomes usable transparency (Transparency Theatre → Shift 4), and consultation becomes genuine influence (Consultation Theatre → Shift 5).
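Shift 2, together with the monitoring hook from Shift 3, is the most directly automatable of the five. The sketch below shows one way policy requirements might be expressed as a pre-deployment gate in a CI pipeline; the check names, metadata fields, and the 5% fairness threshold are illustrative assumptions, not a reference to any particular organization's standard.

```python
# Minimal policy-as-code deployment gate (Shift 2), with monitoring required at
# approval time rather than after the fact (Shift 3). All field names, checks,
# and thresholds are illustrative assumptions.

def policy_gate(model_meta: dict) -> list:
    """Return blocking violations; an empty list means the gate passes."""
    violations = []

    if model_meta.get("impact_assessment_status") != "approved":
        violations.append("no approved impact assessment on file")

    # A policy requirement expressed as a testable threshold, not a principle.
    gap = model_meta.get("subgroup_performance_gap")
    if gap is None or gap > 0.05:
        violations.append("subgroup performance gap missing or above the 5% threshold")

    # Shift 3: approval is conditional on monitoring being wired up, not a one-time stamp.
    if not model_meta.get("drift_monitor_configured", False):
        violations.append("no post-deployment drift monitor configured")

    return violations

if __name__ == "__main__":
    candidate = {
        "owner": "fraud-ml-team",
        "intended_use": "transaction risk scoring",
        "impact_assessment_status": "in_review",
        "subgroup_performance_gap": 0.11,
    }
    problems = policy_gate(candidate)
    for p in problems:
        print(f"blocked: {p}")
    raise SystemExit(len(problems))  # a nonzero exit code fails the CI job
```

Run as a required CI step, a gate like this turns the policy document into a precondition for release: the pipeline fails visibly and automatically when the requirements are not met, which is the difference between documentation and enforcement.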
“Governance is real if and only if it has the authority and willingness to slow, change, or stop a deployment.”
Companies with Principles That Still Failed
The case studies are not edge cases. They are the norm. Every organization below had published AI ethics principles, responsible AI commitments, or governance documentation at the time of its failure. The principles did not prevent the harm. That pattern — principles present, harm proceeding — is the signature of governance theatre.
- Amazon AI Hiring Tool (2014-2015): Built an algorithm to automate resume screening. Trained on 10 years of predominantly male resumes, it learned to penalize resumes containing "women's" or names of all-women's colleges. Amazon tried to correct the bias but "lost confidence that the program was indeed gender neutral" and abandoned it. Amazon had published AI/ML principles. The tool was built despite them.
- Google Gemini Image Generation (February 2024): Produced historically inaccurate and racially insensitive images — generating images of Black and Native American figures in colonial attire when prompted for "portrait of a Founding Father." Google has published AI Principles since 2018, has a responsible AI team, and publishes annual responsible AI reports. None of it prevented the incident.
- Clearview AI: Scraped 30+ billion photos from social media without consent since 2017. Fined by Italy (20M euros), UK (7.5M pounds), France. $51.75M ACLU settlement in 2025. Clearview operated in an industry where companies routinely publish privacy and ethics principles.
- Hiring Bias Across the Industry: iTutorGroup paid $365,000 after its software automatically rejected female applicants 55+ and male applicants 60+. SafeRent paid $2M+ for disparate impact on Black and Hispanic applicants. University of Washington research found massive text embedding models favored white-associated names in 85.1% of resume screening cases. Published principles stopped none of this.
- UK DWP Fraud Detection: Disproportionately targeted individuals based on age, disability, marital status, and nationality. Operated within a governance framework that failed to prevent discriminatory targeting.
These are not organizations without governance. These are organizations with governance — published principles, ethics teams, review processes, compliance documentation. The governance existed. It was theatre. And the failure patterns repeat because the structural incentives for theatre remain stronger than the structural incentives for substance.
The Honest Governance Position
This article is the second in a trilogy. B13 established that governance frameworks have structural limitations — five architectural constraints no framework can overcome. This article, B14, asks: given those limitations, are organizations even trying to work within them? The evidence says most are not. The governance gap is not just structural — it is performative. Most organizations are not failing at governance. They are performing governance while failing at risk reduction.
The trilogy arc forms a complete argument for honest governance:
- B13: The Limits of AI Governance Frameworks — Governance frameworks have five structural limitations that honesty, not denial, must address. The pacing problem, the opacity problem, the boundary problem, the measurement problem, and the emergence problem are architectural features of the relationship between governance and AI. Even the best governance has limits.
- B14: AI Governance Theatre (this article) — Most organizations are not even working within those honest limits. They are performing governance instead of practicing it. The five types of theatre — ethics board, policy, audit, transparency, and consultation — each have diagnostic tests and paths to substance.
- B15: When Should You Stop? — If governance is real, not theatre, what is the hardest decision it enables? When governance limitations compound to the point where an AI deployment's risks exceed its benefits, honest governance says stop. Theatre never stops anything. Substance sometimes must.
The synthesis: honest governance admits its limits (B13), ensures it is not performing (B14), and makes the hardest calls (B15). The ROI of AI Governance makes the business case. The Trust Premium quantifies the market value of genuine governance. The MVG framework provides the structural baseline. And A14 Epistemic Humility provides the philosophical foundation for all of it: govern honestly about what you know, what you do not know, and what you cannot know.
“The most dangerous governance program is the one everyone believes is working.”
If you have read this far and recognized your own organization in the taxonomy, that recognition is not failure. It is the beginning of substance. The first step from theatre to genuine governance is the admission that what you have been doing is not working. The diagnostic tests are uncomfortable by design. The five shifts are structural by necessity. And the honest governance position — pro-governance, critical of performative governance, constructive about the path forward — is the only position worth holding.
For the board member: ask your governance team one question at your next meeting — "When was the last time our governance program stopped or materially changed a deployment?" If they cannot answer, you now know what this article is about. For the governance practitioner: apply the five diagnostic tests to your own program. Count how many you pass. Then start the five shifts, one at a time, beginning with the one your organization is most ready to hear. The path from theatre to substance is not a revolution. It is a sequence of honest conversations that lead to structural change.
Download: AI Governance Theatre Diagnostic Worksheet
Get the complete governance theatre diagnostic: the five-type taxonomy with scoring rubrics, the six-question substance test, the five shifts implementation checklist, and a 90-day remediation planner — ready to print or save as PDF.

Senior AI strategist helping leaders make AI real across four continents. Forbes Technology Council member, IEEE Senior Member.