Key Takeaways
- Community trust is the only non-replicable competitive advantage in AI
- Internal governance protects from risks you anticipate; community governance protects from risks you can’t
- The Solidarity Moat compounds over decades — everything else decays
- Participation without power is performance, not governance
- A co-design workshop costs under $5K and surfaces risks the internal team misses
Last year I reviewed a Series B fintech startup's AI governance programme. On paper, it was the most thorough I'd ever seen. Rigorous bias testing across twelve protected attributes. An internal ethics board that met fortnightly. Explainability reports generated for every lending decision. Model cards, data lineage documentation, quarterly audits — the lot. I told the founder it was genuinely impressive work.
Three months later, a community advocacy group in Atlanta published a report. The startup's small business lending product was systematically disadvantaging minority-owned businesses in five metropolitan areas. Not through any single discriminatory variable — the model was technically fair by every metric the team measured — but through a web of proxy signals the internal team had never thought to test: business address density, banking relationship length, digital footprint patterns. Signals that correlated tightly with race and neighbourhood history, but looked perfectly neutral on a spreadsheet.
The startup's internal ethics board had seven members. All had compliance or engineering backgrounds. Not one came from the communities the product served. Not one had run a small business in those neighbourhoods. The governance was architecturally sound and experientially hollow. They'd built a fortress with no windows.
I spent the next four months helping that team rebuild. Not their models — those were fine. Their governance structure. We brought community leaders into the process. We created mechanisms for the people affected by lending decisions to contest and shape those decisions. And something remarkable happened: the product got better. Not just fairer — more accurate, more commercially successful, more trusted by the very regulators who'd started asking questions. That experience crystallised something I'd been circling for years.
The hardest governance problem isn't technical. It's epistemic. The people building AI systems cannot see what the people affected by those systems see. No amount of internal rigour compensates for that blind spot.
This is Part 4 of the Responsible AI Playbook for Founders — the final instalment. Part 1 laid down the ten core principles. Part 2 operationalised them into a five-layer governance stack. Part 3 showed how to prototype responsibly through design-led strategy. This article pushes beyond internal governance entirely. It asks the question that most founders avoid: who gets to participate in governing your AI?
The Limits of Internal Governance
Start with the numbers. YouGov's December 2025 survey found that only 5% of Americans trust AI “a lot.” Five per cent. That's not a trust deficit — it's a trust vacuum. Meanwhile, Relyance AI's 2024 consumer survey revealed that 82% of consumers consider losing control of their data to AI systems a serious personal threat. And KPMG's generative AI consumer trust study found that 83% of respondents believe technology companies bear primary responsibility for ensuring AI is used ethically.
Let that sit for a moment. The public overwhelmingly holds companies responsible for AI ethics. And the public overwhelmingly does not trust those same companies to deliver. That's not a gap you close with better documentation.
Here's the paradox I keep encountering in advisory work: organisations with mature internal governance don't automatically earn higher trust from the communities they serve. Deloitte's 2025 board survey found that while AI has surged to the top of board agendas, most directors still lack the knowledge to provide effective oversight — and almost none have incorporated external community perspectives into their governance frameworks. The governance is inward-facing. The trust problem is outward-facing. These are different problems requiring different architectures.
Internal governance protects you from the risks you can anticipate. Community governance protects you from the risks you can't. The fintech startup I described had anticipated every quantifiable bias vector. What they couldn't anticipate — because no one in the room had lived it — was the compound effect of proxy signals on communities with specific histories. That's not a failure of rigour. It's a failure of perspective.
The Solidarity Moat
I've spent a lot of time thinking about competitive advantage in AI. The traditional moats — proprietary data, technical talent, computational scale — are eroding faster than most founders realise. Foundation models commoditise technical advantage within a release cycle. Synthetic data and open datasets are shrinking data moats every quarter. And talent? It disperses the moment the field matures enough to produce a second wave of startups. None of these advantages are permanent.
But there's one advantage I've never seen commoditised: community trust. I call it the Solidarity Moat — the competitive position you build when the communities affected by your AI actively choose to support your product because they helped shape it. Not because your marketing convinced them. Not because you're the cheapest option. Because they have genuine ownership in how the system works.
The Solidarity Moat can't be copied. A competitor can replicate your model architecture in weeks. They can poach your engineers in months. They can assemble a comparable dataset in a year. But they cannot replicate the trust you've earned by spending eighteen months co-designing your product with community advisory boards, implementing their feedback, and giving them real influence over how decisions are made. Trust is the only asset that requires time and cannot be bought.
Picture a two-by-two matrix: internal governance maturity (your policies, processes, and controls) on one axis, community trust (whether affected populations actively endorse your product) on the other. Four quadrants emerge. The bottom-left — weak governance, low trust — is where most early-stage AI sits. The top-left — strong governance, low trust — is where that fintech startup was. Technically governed, community-blind. Most enterprise AI lives here. The bottom-right — weak governance, high trust — is unstable: community goodwill without operational rigour collapses at the first incident.
The top-right quadrant is the Solidarity Moat. Strong internal governance and deep community trust. Every company I've worked with that occupies this quadrant outperforms its peers on retention, regulatory outcomes, and long-term revenue. It's the hardest position to reach. It's also the only one that's defensible.
“Your technical moat has a half-life measured in months. Your data moat has a half-life measured in quarters. Your solidarity moat has a half-life measured in decades — if you maintain it. Community trust compounds. Everything else decays.”
The Governance Gravity Model
If the Solidarity Moat is the what, the Governance Gravity Model is the how. Think of governance power like a gravity well. In most organisations, all decision-making mass sits at the centre: the internal team controls every aspect of how AI is built, tested, deployed, and monitored. Everything orbits the company.
Sustainable governance requires distributing that mass outward. Not abandoning internal control — you still need a strong core — but creating stable orbits where external stakeholders hold real gravitational pull. The closer to the centre, the more direct control. But a system where all mass is concentrated at the centre isn't stable. It collapses into itself. You need objects in orbit to create a system that sustains itself.
The Governance Gravity Model has three orbits, each representing a different degree of distributed decision-making power.
The Three Orbits of Governance
Inner Orbit. This is traditional governance — your policies, risk tiers, bias testing, model monitoring, and audit infrastructure. It's the foundation described in Part 2 of this series. Every organisation needs this. But it's the starting point, not the destination. Mechanisms: ethics boards, AI risk committees, automated testing pipelines, model cards, incident response plans. Decision authority: fully internal. The team decides, the team enforces. Limitation: bounded by the team's own experience and assumptions.

Middle Orbit. Structured external input with real standing. Mechanisms: co-design workshops, community advisory boards with charters and budgets, contestation channels with defined response commitments. Decision authority: shared advisory power; the community can trigger a mandatory review, and the company commits to transparency about what it adopts. Limitation: influence without a formal veto.

Outer Orbit. Binding community authority. Mechanisms: data trusts with fiduciary duties, democratic AI councils, governance rights that let affected communities halt a deployment they believe is harmful. Decision authority: genuinely distributed beyond the company. Limitation: the most demanding to build, and credible only once the inner two orbits are solid.
Most startups operate entirely within the Inner Orbit. The goal isn't to leap to the Outer Orbit overnight — that's neither practical nor credible. It's to begin the deliberate, staged migration of governance mass outward. Each orbit you reach creates a more stable system. Each orbit you skip creates fragility.
Practical Steps for Startup-Led Participatory Governance
Theory means nothing without execution. What follows is the implementation sequence I use with founders who want to move beyond internal governance. Four steps, each building on the last. You can start the first one this month.
Step 1: Co-Design Workshops
The lightest-weight, highest-impact first move. Before your next major feature release or model update, run a structured workshop with 8–15 people from the communities your AI affects. Not a focus group — a co-design session where participants work alongside your team to identify blind spots, test assumptions, and surface risks you haven't considered.
The format matters. Don't present a finished product and ask for reactions. Present the problem you're solving, the data you're using, and the trade-offs you're making. Let participants interrogate the decisions. I've run these workshops with healthcare startups, lending platforms, and hiring tools. In every case, participants identified at least three significant risk vectors the internal team had missed. In the fintech case I opened with, the very first workshop surfaced the proxy-signal problem within ninety minutes.
Cost: minimal. Two half-days of preparation, one full-day workshop, and a structured debrief. Participant compensation of $150–300 per person. Total cost for a meaningful co-design session: under $5,000. The information you gain is worth orders of magnitude more.
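For a rough sense of how that budget comes together, here is a minimal sketch. The participant count, stipend, venue, and materials figures are illustrative assumptions, not fixed prices, and facilitation by your own team is assumed.

```python
def workshop_budget(participants: int = 12, stipend: float = 225.0,
                    venue_and_catering: float = 1200.0, materials: float = 300.0) -> float:
    """Illustrative co-design workshop budget. Facilitation by your own team is assumed."""
    return participants * stipend + venue_and_catering + materials

# 12 participants at a $225 stipend, plus venue, catering, and materials,
# comes to roughly $4,200 -- comfortably under the $5,000 figure above.
print(workshop_budget())  # 4200.0
```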
Step 2: Community Advisory Boards
Once you've validated the co-design approach, formalise it. Establish a community advisory board (CAB) with 5–9 members drawn from affected populations, domain experts, and civil society organisations. Give them a charter, quarterly meetings, and — critically — a budget and direct access to your product leadership.
Woebot Health provides a strong model here. Their clinical research programme integrates patient perspectives, therapist feedback, and academic review into product governance. The advisory structure isn't decorative — it shapes product decisions and is cited in their published research. The OECD's 2025 analysis of participatory governance tools found that AI-enabled civic participation can reach broader demographics, but warned that without intentional design, 38% of populations risk exclusion from digital participation processes. Advisory boards need to be designed for inclusion, not just assembled for optics.
Step 3: Feedback and Redress Mechanisms
People affected by your AI's decisions need clear, accessible channels to contest those decisions — and to see that their contestation leads to action. This goes beyond a customer support ticket. Build structured mechanisms for algorithmic contestation: a process where someone who believes they've been unfairly treated by your AI can trigger a human review, receive an explanation, and see the outcome documented.
The Brennan Center's research on AI and participatory democracy makes a crucial point: meaningful participation requires not just input channels but responsive channels. If feedback disappears into a queue and nothing visibly changes, trust erodes faster than if the channel didn't exist at all. Every contestation should generate a response within a defined timeframe. Aggregate contestation data should feed directly into your model monitoring pipeline. Individual complaints are support tickets. Patterns in complaints are governance signals.
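As a concrete illustration rather than a prescribed schema, here is a minimal Python sketch of what a contestation record and its aggregation into monitoring signals might look like. The seven-day response window and the field names are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from collections import Counter

# Illustrative response window -- set this to whatever timeframe you commit to publicly.
RESPONSE_WINDOW = timedelta(days=7)

@dataclass
class Contestation:
    """One person's challenge to an automated decision."""
    decision_id: str
    model_version: str
    reason_category: str           # e.g. "eligibility", "pricing", "identity"
    submitted_at: datetime
    resolved_at: datetime | None = None
    outcome: str | None = None     # documented result of the human review

    @property
    def response_due(self) -> datetime:
        return self.submitted_at + RESPONSE_WINDOW

    @property
    def overdue(self) -> bool:
        return self.resolved_at is None and datetime.now() > self.response_due


def governance_signals(contestations: list[Contestation]) -> dict:
    """Aggregate individual contestations into the patterns that belong in model monitoring."""
    return {
        "volume": len(contestations),
        "by_reason": dict(Counter(c.reason_category for c in contestations)),
        "by_model_version": dict(Counter(c.model_version for c in contestations)),
        "overdue_reviews": sum(c.overdue for c in contestations),
    }
```

The output of `governance_signals` belongs on the same dashboard as your drift and accuracy metrics, so a spike in contestations against one model version is treated like any other model-health alert.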
Step 4: Strategic Partnerships
No startup can build participatory governance alone. The expertise sits in academic institutions, NGOs, and community-based organisations (CBOs) that have spent decades building trust with the populations you're trying to serve. Partner with them deliberately.
Stanford HAI's AI Index tracks the growing body of research on participatory AI governance, providing frameworks that startups can adapt. People Powered's digital participation research offers practical toolkits for structuring community input at scale. Academic partnerships provide methodological rigour. CBO partnerships provide community access and trust that would take you years to build independently. The key: these must be genuine partnerships with shared decision-making, not extractive consulting arrangements where you borrow credibility.
Participatory Governance Implementation Sequence
A staged approach from first workshop to sustained community governance
Month 1–2: First Co-Design Workshop
Identify affected communities, recruit 8–15 participants, run a structured co-design session on your highest-impact AI feature. Document blind spots surfaced. Estimated cost: under $5,000.
Month 3–5: Community Advisory Board
Formalise a 5–9 member advisory board with a charter, quarterly cadence, and direct access to product leadership. Include affected users, domain experts, and civil society representatives. Budget: $15K–30K annually.
Month 4–7: Feedback and Redress Channels
Build structured contestation mechanisms: clear process for challenging AI decisions, defined response timeframes, human review triggers. Feed aggregate contestation data into model monitoring.
Month 6–12: Strategic Partnerships
Establish formal partnerships with 1–2 academic institutions and 1–2 community-based organisations. Shared research agreements, co-published findings, joint governance review cycles.
The Trust Dividend: Evidence That Community Governance Works
The sceptic's question is fair: does any of this actually improve business outcomes? The data is unambiguous.
Relyance AI's consumer survey found that 75% of consumers would pay more for products from companies with verified AI data practices. KPMG's 2024 trust study confirmed that 83% of consumers hold technology companies primarily responsible for ethical AI use — and increasingly make purchasing decisions accordingly. EY's 2025 governance survey found direct links between responsible AI governance advancement and measurable business outcomes: faster deployment, stronger stakeholder relationships, and improved regulatory readiness.
Let me return to the fintech startup from the opening. After we rebuilt their governance structure with community participation, three things happened. First, their model accuracy in the affected neighbourhoods improved by 14% — the community advisory board identified data gaps the team had never tested for, and filling those gaps improved predictions across the board. Second, customer retention in minority-owned business segments jumped from 61% to 89% within two quarters. Third, and perhaps most telling, their relationship with the OCC shifted entirely. Regulatory conversations that had been adversarial became collaborative. The regulator cited their participatory governance structure as a model for the sector.
Their NPS in affected communities went from 23 to 67. Not because the algorithm changed dramatically — because the people affected by it finally had a seat at the table. Trust isn't a feature you ship. It's a relationship you build.
The Democratic Governance Frontier
Where is this heading? The trajectory is clear even if the timeline isn't. Data trusts — legal structures where communities collectively govern how their data is used — are moving from academic concept to practical implementation in healthcare, civic technology, and financial services. Community ownership models, where affected populations hold equity or governance rights in AI systems that affect them, are being piloted in several jurisdictions. Municipal AI councils, with binding authority over public-sector AI deployment, are emerging across Europe and in a handful of US cities.
But the frontier carries risks. Connected by Data's analysis of participatory digital governance warns pointedly about "participatory in name only" — organisations that create the appearance of community input without granting real influence. OpenAI's Democratic Inputs programme is instructive: they funded ten teams at $100,000 each to develop democratic governance prototypes, but critics rightly noted that the company retained unilateral authority over which recommendations to adopt. Participation without power is performance.
The risks extend further. The FCC's public comment process on net neutrality received over one million fraudulent comments generated by bots — a stark warning about what happens when democratic mechanisms meet adversarial AI. The Brennan Center has documented how AI can both enhance and undermine participatory democracy, depending entirely on implementation choices. The OECD's analysis found that AI-powered civic participation tools can dramatically expand who participates — but only when designed with explicit inclusion requirements. Without them, digital participation replicates and amplifies existing inequalities.
Governance Models: Authority and Accountability
| Model | Who Decides | Maturity Level |
|---|---|---|
| Internal Ethics Board | Company leadership retains full authority. External input is optional and non-binding. Fastest to implement but narrowest perspective. | Common |
| Community Advisory Board | Shared advisory authority. Community members provide structured input; company commits to transparency about adoption. Most practical starting point for startups. | Emerging |
| Data Trust | Independent trustees govern data use on behalf of a defined community. Legal structure with fiduciary duties. Growing in healthcare and civic tech. | Experimental |
| Democratic AI Council | Elected or appointed body with binding or semi-binding authority over AI governance decisions. Highest legitimacy, highest complexity. | Frontier |
The maturity test I use with founders is simple: who has veto power? If only your internal team can block a deployment, you're in the Inner Orbit. If your community advisory board can trigger a mandatory review, you're in the Middle Orbit. If affected communities can halt a deployment they believe is harmful, you've reached the Outer Orbit. Most companies aren't ready for the Outer Orbit — and that's fine. The question isn't where you are. It's whether you're moving outward.
The test of genuine participatory governance isn't whether you consult communities. It's whether those communities can say no — and be heard.
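If it helps to make the veto-power test explicit, here is a minimal sketch of the check as a pre-deployment question. The enum and parameter names are illustrative, not a prescribed implementation.

```python
from enum import Enum

class Orbit(Enum):
    INNER = "Only the internal team can block a deployment"
    MIDDLE = "A community advisory board can trigger a mandatory review"
    OUTER = "Affected communities can halt a deployment they believe is harmful"

def governance_orbit(cab_can_force_review: bool, community_can_halt: bool) -> Orbit:
    """Classify governance maturity by who holds veto power."""
    if community_can_halt:
        return Orbit.OUTER
    if cab_can_force_review:
        return Orbit.MIDDLE
    return Orbit.INNER

# A startup whose advisory board can force a review, but where no community veto exists:
print(governance_orbit(cab_can_force_review=True, community_can_halt=False))  # Orbit.MIDDLE
```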
The Long Game
This is the final instalment of the Responsible AI Playbook for Founders, and I want to close with something personal. Over the past fifteen years, I've advised hundreds of founders building AI products. The ones who build enduring companies — the ones still standing after regulatory shifts, market corrections, and public trust crises — share one characteristic. They don't build for communities. They build with them.
Part 1 gave you the principles. Part 2 gave you the operational governance stack. Part 3 showed you how to prototype responsibly. This article — Part 4 — asks you to go further. To recognise that the most sophisticated internal governance in the world still has blind spots that only the people you serve can see. To accept that distributing governance power makes your product stronger, not weaker. To understand that the Solidarity Moat — community trust earned through genuine participation — is the only competitive advantage that compounds over time.
The AI governance frontier isn't a technical problem. It's a democratic one. And the founders who reach that frontier first won't just build better products. They'll build the institutions that shape how AI serves society for the next generation.
Your action item: Before your next product milestone, identify the three communities most affected by your AI's decisions. Reach out to one community leader from each. Ask a single question: 'What are we not seeing?' That conversation is the first step toward the Solidarity Moat.
Download: The Responsible AI Playbook for Founders
Get the complete 4-chapter playbook worksheet: principle self-assessment matrix, governance readiness scorecard, design ethics checklist, community engagement planner, 90-day sprint, and risk tier classification — ready to print or save as PDF.
The Complete Series
This four-part series covers the full arc from principles to democratic governance:
- Part 1: The 10 Core Principles — The foundation. Ten principles ordered by implementation priority for AI founders.
- Part 2: From Principles to Practice — The five-layer governance stack that turns principles into operational infrastructure.
- Part 3: Design-Led Strategy and Prototyping — How to prototype responsibly and embed ethics into the design process.
- Part 4: The Governance Frontier — Solidarity, democratic participation, and the only non-replicable competitive advantage in AI. (You are here.)
For a broader view of the regulatory and standards environment shaping AI governance, see the OECD AI Principles guide and the EU AI Act strategic guide. To assess your organisation's readiness for participatory governance, the 5-Pillar AI Readiness Assessment provides the diagnostic starting point. And if you want to explore how these principles apply to your specific product, let's talk.
Senior AI strategist helping leaders make AI real across four continents. Forbes Technology Council member, IEEE Senior Member.