Key Takeaways
- 98% of organizations report unsanctioned AI use — shadow AI is the largest unmanaged third-party risk
- The Workday ruling made AI vendors legally liable as agents, not just tools
- 88% of AI vendors cap liability at the subscription fee while your exposure is uncapped
- Only 17% of AI vendors commit to regulatory compliance
- Govern all three layers: shadow AI, vendor AI, and API dependencies
The blind spot in every enterprise AI strategy
The Risk You Can't See: 98% of Organizations Use Unauthorized AI
Here is the third-party AI risk that nobody is governing: 98% of organizations report unsanctioned AI use. Not 50%. Not 75%. Ninety-eight percent. Your employees are uploading confidential data to ChatGPT, pasting customer records into Claude, feeding proprietary code to Copilot — and they are doing it right now, while you read this. Shadow AI breaches cost $4.63 million on average — $670,000 more than standard incidents. Shadow AI accounts for 20% of all data breaches. And 97% of organizations with AI-related breaches lacked proper access controls. The pattern is consistent and damning: the AI risk that will hit your organization hardest is the AI you do not know about.
But shadow AI is only the most visible layer of a much deeper problem. You govern your own AI systems. You have policies for the models your data science team builds. You might even have an AI ethics board. But what about the AI inside your vendors' products? The AI making decisions on your data inside Salesforce Einstein, Workday, and ServiceNow? The AI your product depends on through APIs from OpenAI, Anthropic, and Google? Third-party AI risk is the gap between the AI you govern and the AI that governs your outcomes. And for most organizations, that gap is enormous.
[Figure: The Third-Party AI Risk Gap. The invisible risk surface most governance programs miss. Sources: Netwrix 2026, IBM Cost of a Data Breach 2025.]
The financial exposure is not hypothetical. IBM's 2025 Cost of a Data Breach report found that shadow AI incidents take 247 days to detect — nearly eight months of undetected data exposure. 65% of shadow AI breaches compromised customer PII, versus lower rates in standard breaches. And Gartner predicts that 40% of enterprises will experience security or compliance incidents linked to shadow AI by 2030. The cost is compounding. The risk surface is expanding. And only 37% of organizations have AI governance policies of any kind — let alone policies that extend to the AI inside their vendors' systems.
80% of workers — including nearly 90% of security professionals — use unapproved AI tools at work. 77% of employees paste sensitive business data into AI tools, with 82% doing so from unmanaged accounts. Sensitive data makes up 34.8% of employee ChatGPT inputs, up from 11% in 2023. These are not edge cases. They are the norm. Every prompt sent to a third-party AI model is data leaving your enterprise — with no control over how it is stored, whether it trains the vendor's model, or how long it is retained.
Every unsanctioned AI tool is an unvetted third-party vendor. When an employee pastes company data into ChatGPT using a personal account, they have created an unauthorized vendor relationship with OpenAI — no SLA, no data protection agreement, no audit rights. Shadow AI is not an employee behavior problem. It is the largest unmanaged third-party risk in most enterprises.
This article maps the full third-party AI risk landscape — from shadow AI to vendor AI to API dependencies — and provides the governance framework, contract clauses, and assessment methodology to close the gap. It is written for CPOs negotiating vendor contracts, CISOs mapping risk surfaces, Chief Risk Officers building governance programs, and CTOs managing API dependencies. The research draws on IBM's 2025 Cost of a Data Breach report, the Mobley v. Workday ruling, the EU AI Act's deployer obligations, and analysis of vendor contract terms across the AI SaaS market.
Three Layers of Risk You're Probably Not Governing
Third-party AI risk is not a single category. It operates in three concentric layers, each expanding the risk surface beyond what most governance frameworks cover. The organizations that understand this layered structure will govern effectively. The organizations that treat it as a single problem will miss two-thirds of their exposure.
Layer 1: Shadow AI — The Unmanaged Vendor Relationship
Shadow AI is every AI tool your employees use without organizational approval. ChatGPT on a personal account. Claude accessed through a browser. GitHub Copilot activated by a developer without IT knowledge. 68% of employees use unauthorized AI tools, up from 41% in 2023 — a 66% increase in two years. 93% of executives and senior managers use shadow AI tools, which means the people with access to the most sensitive information are also the most active shadow AI users.
The data exposure is staggering. 59% of employees use shadow AI, and 39.7% of their interactions involve sensitive data. 47% of generative AI users access tools through personal accounts, completely bypassing enterprise controls. Every one of these interactions is an unauthorized data transfer to a third party. And unlike a traditional shadow IT risk — an employee using an unapproved SaaS tool — shadow AI involves sending your actual data to the vendor, not just creating an unmanaged account. The data has left the building. You cannot recall it.
Layer 2: Vendor AI — The AI Inside Your SaaS Stack
This is the layer most organizations miss entirely. You did not buy AI. You bought a CRM, an HR system, a finance platform. But those vendors have embedded AI into their products — AI that makes decisions on your data. Salesforce Einstein scores your leads. Workday AI screens your job applicants. ServiceNow uses AI to route your support tickets. These are not optional AI features you activated. They are default behaviors embedded in the products you already use. And they create liability that flows to you, not the vendor.
The risk is structural. 92% of AI vendors claim broad data usage rights. Only 17% of AI vendors commit to full regulatory compliance — compared to 36% in broader SaaS. Only 33% provide indemnification for third-party IP claims. And most AI vendor agreements are recycled SaaS templates that fail to address unique AI risks — no provisions for model drift, no bias testing obligations, no transparency about how your data trains the vendor's model. You are governed by contracts that were written for a pre-AI world.
Layer 3: API Dependencies — Your Product Runs on Someone Else's AI
The third layer affects every organization that has integrated external AI APIs into their products or operations. Your customer service chatbot runs on OpenAI's API. Your fraud detection uses a third-party model. Your recommendation engine calls an external service. Each of these dependencies is a risk vector: the API changes, your product breaks. The API goes down, your service goes down. The vendor changes its data policies, your compliance posture changes. Supply chain-related breaches increased approximately 40% since 2023. Supply chain attacks accounted for 47% of total affected individuals in H1 2025, with an average cost of $4.91 million.
The concentration risk multiplies the exposure. If 40% of the banking sector's customer service AI runs on the same foundation model, a vulnerability hits 40% of the sector at once. This is not a theoretical concern — it is the structural reality of an AI market dominated by a small number of foundation model providers. Your vendor risk is not just about your vendors. It is about your vendors' vendors. If your AI vendor uses OpenAI's API, you are effectively using OpenAI. Your due diligence must extend to these fourth parties.
[Figure: Three Concentric Layers of Risk. Each layer expands the risk surface beyond your governance boundary. Sources: Netwrix, CIO/TermScout.]
Layer 1 (Shadow AI) creates unauthorized vendor relationships. Layer 2 (Vendor AI) creates ungoverned decision-making. Layer 3 (API Dependencies) creates systemic fragility. Most governance frameworks address only Layer 1. The organizations that govern all three layers will have a structural advantage over those that do not.
The Court Ruling That Rewrote AI Vendor Liability
In July 2024, a federal court in San Francisco issued a ruling that fundamentally changed the relationship between enterprises and their AI vendors. In Mobley v. Workday, the court held that AI vendors are not mere tools — they are "agents" participating in employment decisions. The ruling means that Workday, as the vendor providing AI-powered applicant screening, could face direct liability for discriminatory outcomes produced by its algorithms. Before this ruling, the vendor could argue it was just providing software. After it, the vendor is a participant in the decision.
The case began in 2023 when Derek Mobley filed suit alleging that Workday's AI screening tools discriminated on the basis of race, age, and disability. The court's key finding was precise and devastating: Workday's software "is not simply implementing in a rote way the criteria that employers set forth, but is instead participating in the decision-making process." In May 2025, the court preliminarily certified a nationwide collective action covering applicants over age 40 rejected by Workday's AI screening. In July 2025, the scope expanded to include individuals processed using Workday's HiredScore AI features.
The implications extend far beyond hiring. Any AI vendor making or influencing decisions — lending, insurance underwriting, content moderation, customer service routing — is now potentially an "agent" subject to direct liability. A single biased algorithm can multiply discrimination across hundreds of employers and thousands of applicants, making the scale of harm (and therefore liability) far larger than traditional discrimination cases. This is not about one vendor or one use case. It is about the fundamental legal relationship between enterprises and every AI system that participates in consequential decisions.
The liability gap that nobody negotiates
Here is the structural problem: 88% of AI vendors cap their liability at the monthly subscription fee. Your Workday subscription might cost $50,000 per year. The CFPB penalty for a single discriminatory AI decision can reach $2.5 million per incident. Enterprises face uncapped regulatory liability for decisions made by vendor AI they cannot examine, using training data they cannot audit, with decision-making logic they cannot fully understand.
[Figure: The Liability Asymmetry. 88% of AI vendor contracts create this imbalance. Sources: CIO/TermScout, Seyfarth Shaw.]
The math is simple and devastating. Your vendor's maximum exposure: $50,000. Your exposure per incident: $2.5 million. That is a 50:1 liability asymmetry. And it is not an accident — it is the default contractual position of 88% of AI vendors. They drafted the contract. They set the cap. And most procurement teams signed it without negotiating the AI-specific provisions because those provisions did not exist in the template.
Only 17% of AI vendors provide warranties for regulatory compliance. The other 83% are selling you AI that makes decisions on your data, creates liability for your organization, and offers no guarantee that it complies with the regulations you are subject to. Employers remain fully liable under Title VII when vendor AI tools produce discriminatory outcomes. Outsourcing parts of decision-making to vendor AI does not outsource liability. It concentrates liability in the organization with the least visibility into how the AI works.
Your vendor contract says their liability is $50K per year. The CFPB says yours is $2.5M per incident. That is the gap. And the Workday ruling means your vendor is now your legal "agent" — their bias is your liability. Print your AI vendor contracts. Check the liability cap. Then ask whether that cap reflects your actual exposure.
The Workday ruling also changes the procurement conversation. Before Mobley, vendor AI risk was a theoretical discussion. After Mobley, it is a demonstrated legal reality. Before signing, CPOs and procurement leaders must now require bias testing documentation, model cards, and audit rights. In the contract itself, they need indemnification clauses specific to AI-driven discrimination claims. And after signing, they must conduct independent bias audits on vendor AI systems used in employment, lending, insurance, and customer-facing decisions. Vendors who refuse to provide algorithmic transparency or who cap liability at trivial amounts for high-risk use cases should be rejected.
For a deeper analysis of how accountability structures must evolve to address AI-driven decisions, see our exploration of the Delegation Deficit — the gap between the authority organizations grant AI and the accountability structures governing those decisions.
When Third-Party AI Failed
The risks described above are not theoretical. Every one of them has materialized in documented incidents. These five case studies illustrate the range of third-party AI failures — from supply chain breaches to vendor insolvency to API outages — and the governance gaps that enabled them.
Case 1: The Drift/Salesloft supply chain breach
In August 2025, attackers exploited integrations tied to Salesloft's Drift AI chatbot to compromise OAuth tokens and pivot into Salesforce and Google Workspace environments across 700+ organizations. This was not an LLM jailbreak or a prompt injection attack. It was a supply chain event rooted in OAuth trust, API access, and privileged integration scopes. The breach demonstrated that AI vendor risk extends beyond the model itself to the entire integration layer — the authentication tokens, the API permissions, and the data access patterns that AI tools require to function.
Case 2: McDonald's AI voice ordering failure
McDonald's ended its AI-powered voice ordering partnership with IBM in July 2024 after the pilot encountered persistent problems with different dialects and failed to meet accuracy standards. Customers reported wrong orders, bizarre additions, and an inability to communicate corrections. Separately, McDonald's McHire.com platform (powered by Paradox.ai) exposed personal data for approximately 64 million applicants through default admin credentials. Two vendor AI failures. Two different risk categories. One brand bearing the reputational cost.
Case 3: Builder.ai insolvency
Builder.ai, valued at $1.3 billion and backed by Microsoft and the Qatar Investment Authority, entered insolvency proceedings in May 2025. Customers scrambled to protect digital assets and operations. When an AI vendor folds, prompt logs, fine-tuned models, embeddings, and outputs hosted on their cloud can vanish. Bankruptcy means AI assets become part of the estate; a court-appointed trustee decides their fate; LLMs and datasets may be auctioned to the highest bidder. Your data. Their bankruptcy court. No portability clause? No recourse.
Case 4: ChatGPT 34-hour outage
In June 2025, ChatGPT experienced its longest outage in history: 34 hours. Twenty-one components failed simultaneously in what was described as a systemic architectural failure. Businesses dependent on the API for customer service, content generation, and document processing experienced a full business continuity crisis. 120 million daily active users were affected. Marketing agencies reported losing $500 to $5,000 per outage in billable hours. And ChatGPT Plus subscribers have no SLA guarantees — only Enterprise customers have formal uptime commitments.
Case 5: AI supply chain attack on logistics SaaS
In early 2025, threat actors deployed self-learning malware that infiltrated a leading logistics SaaS provider's update servers, injecting malicious code into its core platform. Operations were disrupted for 500+ global retailers. The attack did not target any individual enterprise. It targeted the vendor layer — the shared infrastructure that hundreds of organizations depend on. Your security posture is only as strong as your weakest vendor's security posture.
[Figure: Five Vendor AI Failures. Five incidents, five different failure modes, zero originated inside the enterprise. Sources: ProcessUnity, TechTarget, CodeKeeper, DataStudios, NeuralTrust.]
Five incidents. Five different failure modes: supply chain breach, accuracy failure, vendor insolvency, API outage, and malware propagation. The common thread: none of these failures originated inside the enterprise. All of them created enterprise-level consequences. Third-party AI governance is not optional.
Regulators Are Coming for Your Vendor Relationships
The regulatory environment for third-party AI risk is evolving faster than most governance programs can track. Six major frameworks now impose specific obligations for how enterprises manage AI provided by third parties. None of them accept "our vendor did it" as a defense.
EU AI Act: deployers are independently liable
The EU AI Act makes the enterprise (deployer) independently liable for high-risk AI systems, regardless of who built them. Deployers must verify technical documentation, confirm conformity assessments, implement human oversight, and inform affected individuals — obligations that exist separately from the provider's obligations. You cannot point at your vendor and say "they were the provider." Full application for high-risk AI systems takes effect in August 2026, with prohibited practices and AI literacy requirements already effective since February 2025. The EU has also published Model Contractual Clauses for AI procurement (MCC-AI), establishing a template for how enterprises should structure vendor relationships.
DORA: continuous monitoring of AI third parties
The Digital Operational Resilience Act (DORA) entered into application in January 2025 and applies to 20 types of financial entities and their ICT third-party service providers — including AI vendors. DORA requires a comprehensive register of all ICT third-party arrangements, active (not periodic) monitoring of third-party risks, designation of critical ICT providers, concentration risk analysis, and sub-contractor oversight extending to fourth-party AI dependencies. The Annual Register of Information submission was due in early 2026 with detailed documentation of every vendor relationship.
NSA, SEC, and NIST: the convergence
Three additional frameworks are converging on the same conclusion. The NSA published AI supply chain guidance in March 2026, recommending that organizations require an AI Bill of Materials (AIBOM) and Software Bill of Materials (SBOM) from all AI vendors. The SEC's 2026 examination priorities highlight third-party risk management, with the Division of Examinations evaluating vendor AI use. And the NIST AI Risk Management Framework's MAP function covers supply chain and third-party risk, requiring organizations to inventory all third-party AI systems and identify fourth-party dependencies. The message from regulators is unanimous: your vendors' AI is your risk surface.
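There is no single mandated AIBOM schema yet, but the record-keeping it implies is straightforward. A minimal sketch in Python, with illustrative field names and values (none of these come from the NSA guidance itself):

```python
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    """One component in an AI Bill of Materials (field names are illustrative)."""
    component: str                  # the capability the business sees
    vendor: str
    model_name: str
    model_version: str
    training_data_provenance: str   # documented source, or "undisclosed"
    fourth_parties: list[str] = field(default_factory=list)

inventory = [
    AIBOMEntry("resume screening", "HR SaaS vendor", "vendor-screening-model",
               "2026-01", training_data_provenance="undisclosed",
               fourth_parties=["foundation model provider"]),
]

# A simple audit pass: flag components whose provenance the vendor will not disclose.
print([e.component for e in inventory
       if e.training_data_provenance == "undisclosed"])
```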
[Figure: The Regulatory Convergence. Six frameworks, one conclusion: your vendors' AI is your liability. Sources: EU AI Act, DORA, NSA, SEC, NIST AI RMF, CFPB.]
The regulatory pattern is clear: every major framework treats the enterprise as independently liable for third-party AI. The EU AI Act, DORA, NSA guidance, SEC priorities, and NIST AI RMF all require proactive vendor governance. The question is not whether you need a third-party AI risk program. The question is whether you have one before the next enforcement action.
For a comprehensive walkthrough of the NIST AI Risk Management Framework and how it maps to your existing governance, see the NIST AI RMF Practitioner's Guide and Crosswalk. For EU AI Act compliance obligations, see the EU AI Act Strategic Guide.
How to Govern AI You Don't Own
Governing third-party AI requires a five-step framework that extends your existing governance beyond the boundaries of your organization. Each step maps to a specific organizational capability. The framework is sequential — each step depends on the one before it — and the organizations that implement all five will have a governance posture that most competitors lack entirely.
Step 1: AI Inventory (Extended)
Map every AI system touching your organization — not just the AI your team built, but the AI inside your vendors' products and the AI your employees are using without approval. This extended inventory has three sections. Internal AI: models your data science team builds and deploys. Vendor AI: AI features embedded in your SaaS stack (CRM, HR, finance, customer service). Shadow AI: unauthorized tools employees use. 90% of CISOs say shadow AI is a significant concern, but fewer than 30% have implemented technical controls beyond policy. Policy alone does not create an inventory. You need technical discovery — network monitoring, browser extension audits, SSO logs — to find the AI your employees are actually using.
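A minimal discovery sketch, assuming you can export proxy or SSO logs as CSV with user and host columns (the column names, file name, and domain list are illustrative assumptions, not a standard):

```python
import csv
from collections import Counter

# Illustrative, not exhaustive: hostnames of common public AI tools.
AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
}

def discover_shadow_ai(proxy_log_csv: str) -> Counter:
    """Count requests to known AI hosts per user from a proxy log export.

    Assumes a CSV with 'user' and 'host' columns; adapt to your log schema.
    """
    hits = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in discover_shadow_ai("proxy_log.csv").most_common(20):
        print(f"{user} -> {host}: {count} requests")
```

Even a crude count like this turns "policy says no" into an inventory you can act on.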
Step 2: Vendor AI Assessment
Before every AI vendor contract — new or renewal — run a structured assessment across six dimensions: data governance and privacy (25% weight), security and technical risk (20%), bias and fairness (15%), transparency and explainability (15%), business continuity and resilience (15%), and regulatory compliance (10%). This is the Vendor Assessment Scorecard — a companion tool to this article with 30 questions across these six categories. The assessment produces a composite risk score that determines whether the vendor relationship requires enhanced governance, standard governance, or rejection.
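As a sketch of how the weighting works: the weights below are the ones listed above, while the 0-to-5 scale and the tier thresholds are illustrative assumptions to calibrate against your own risk appetite.

```python
# Dimension weights from the assessment framework described above.
WEIGHTS = {
    "data_governance": 0.25, "security": 0.20, "bias_fairness": 0.15,
    "transparency": 0.15, "continuity": 0.15, "regulatory": 0.10,
}

def composite_risk_score(scores: dict[str, float]) -> float:
    """Weighted composite from per-dimension scores on a 0-5 scale (5 = lowest risk)."""
    assert set(scores) == set(WEIGHTS), "every dimension must be scored"
    return sum(WEIGHTS[d] * s for d, s in scores.items())

def governance_tier(score: float) -> str:
    # Illustrative thresholds; calibrate to your own risk appetite.
    if score >= 4.0:
        return "standard governance"
    if score >= 2.5:
        return "enhanced governance"
    return "reject or remediate before signing"

vendor = {"data_governance": 3.5, "security": 4.0, "bias_fairness": 2.0,
          "transparency": 2.5, "continuity": 4.0, "regulatory": 3.0}
score = composite_risk_score(vendor)
print(f"{score:.2f} -> {governance_tier(score)}")  # 3.25 -> enhanced governance
```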
Step 3: Contract Provisions
Standard SaaS contracts do not address AI risk. You need ten specific governance clauses negotiated into every AI vendor contract. These clauses cover model documentation, audit rights, bias testing, data usage restrictions, incident notification, liability allocation, data portability, performance monitoring, regulatory compliance, and termination provisions. The full list, with implementation guidance for each clause, appears below.
Step 4: Continuous Monitoring
Annual reviews do not catch an AI system that started hallucinating last Tuesday. Vendor AI monitoring must be continuous, not periodic. This means tracking vendor model version changes, monitoring for performance drift, reviewing sub-processor changes, flagging regulatory enforcement actions against vendors, and running independent bias audits on vendor AI used in high-risk decisions. The TPRM market is projected to more than double from $9 billion in 2025 to $19.9 billion by 2030 — because the market recognizes that periodic assessments are not sufficient for AI governance.
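A minimal sketch of the trigger logic, assuming you keep a baseline snapshot per vendor and re-measure accuracy on your own evaluation set; the snapshot fields, the five-point drift threshold, and the "AcmeOCR" sub-processor are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VendorSnapshot:
    model_version: str
    accuracy: float                 # measured on your evaluation set, not the vendor's
    sub_processors: frozenset[str]

def reassessment_triggers(baseline: VendorSnapshot, current: VendorSnapshot,
                          max_accuracy_drop: float = 0.05) -> list[str]:
    """Return which continuous-monitoring triggers fired since the baseline."""
    fired = []
    if current.model_version != baseline.model_version:
        fired.append("model version changed")
    new_subs = current.sub_processors - baseline.sub_processors
    if new_subs:
        fired.append("sub-processor added: " + ", ".join(sorted(new_subs)))
    if baseline.accuracy - current.accuracy > max_accuracy_drop:
        fired.append(f"accuracy drift: {baseline.accuracy:.2f} -> {current.accuracy:.2f}")
    return fired

baseline = VendorSnapshot("v2.1", 0.91, frozenset({"OpenAI"}))
current = VendorSnapshot("v2.2", 0.84, frozenset({"OpenAI", "AcmeOCR"}))
print(reassessment_triggers(baseline, current))
```

Any non-empty result is a reassessment event, not an annual-review agenda item.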
Step 5: Exit Planning
94% of organizations are concerned about vendor lock-in. 45% say vendor lock-in has already hindered their ability to adopt better tools. 57% of IT leaders spent $1 million or more on platform migrations in the last year. Exit planning is not pessimism — it is governance. Every AI vendor relationship should include documented data portability rights, model export capabilities, transition assistance obligations, and a maximum acceptable switching cost. When Builder.ai went bankrupt, the organizations with exit clauses preserved their data. The organizations without them lost everything.
[Figure: The Third-Party AI Governance Framework. Five sequential steps to govern AI you don't own. Framework based on a synthesis of ISACA, Cranium AI, and TrustArc.]
The five-step framework is sequential: inventory before assessment, assessment before contract, contract before monitoring, monitoring before exit planning. Skipping steps creates the same governance gaps that enabled every case study above. Start at Step 1. Do not skip to Step 3 because contracts feel more urgent than inventory.
10 Governance Clauses for Every AI Vendor Contract
Most AI vendor agreements are recycled SaaS templates that were drafted before AI was a material component of the product. They address uptime and data storage but not model transparency, bias liability, or training data usage. These ten clauses, synthesized from Gouchev Law, Holon Law, Internet Lawyer Blog, CCSD Council, and Stanford CodeX, close the gap between standard SaaS contracts and the governance that AI vendor relationships require.
- 1. Data Use Restrictions. Explicit "no training," "no commingling," and "no retention" clauses for enterprise data. Your data feeds the vendor's product. It does not feed the vendor's model.
- 2. Model Transparency. Right to model cards, evaluation reports, and training data provenance documentation. If the vendor cannot explain how its model works, you cannot assess its risk.
- 3. Audit Rights. On-site or remote inspection rights. Annual evidence review. Algorithmic decision-making examination. The right to bring your own auditor.
- 4. Bias and Fairness Warranties. Documented bias testing results. Ongoing fairness monitoring obligations. Specific remediation timelines for identified bias.
- 5. Sub-Processor Transparency. Full list of sub-processors (including foundation model providers). Advance notice before additions. Right to object or terminate if a sub-processor is unacceptable.
- 6. Incident Response. Time-bound notification and response requirements for AI-specific incidents: drift, hallucination, bias, data exposure. Not just "security incidents" — AI incidents.
- 7. Indemnification. Explicit coverage for AI-driven discrimination claims, IP infringement, and regulatory penalties. Not capped at the subscription fee for high-risk use cases.
- 8. Data Portability and Exit. Export rights for models, embeddings, fine-tuning data, and outputs. Transition assistance obligations. Maximum 90-day exit window.
- 9. Performance SLAs. Uptime guarantees. Accuracy and quality metrics with baselines. Remedies for degradation — not just credits, but the right to terminate.
- 10. Regulatory Compliance. Explicit warranties for applicable regulations (EU AI Act, GDPR, sector-specific). Flow-down obligations to sub-processors. Annual compliance attestation.
Print this list. Bring it to your next vendor negotiation. If your vendor pushes back on clauses 1, 3, or 7, that tells you everything you need to know about how they view their liability versus yours.
Contracts should also include continuous monitoring triggers requiring reassessment when there is a material change in the vendor's model version, a change in the sub-processor list, a reported regulatory enforcement action, or significant performance deviation. These triggers ensure that the governance established at contract signing does not erode as the vendor's AI evolves. An AI system that was compliant at signing may drift into non-compliance after a retraining cycle — and your contract should require the vendor to notify you when that happens.
Your Food Delivery App Depends on 5+ AI APIs
Third-party AI risk is not just an enterprise problem. Every modern startup that integrates AI — and in 2026, that is most of them — is building on a stack of third-party AI dependencies that create concentrated risk. Consider a typical food delivery startup. On any given order, six or more AI systems are making decisions:
- Route optimization (Google Maps AI / Mapbox) — an API pricing change or outage cascades to delivery times and costs
- Demand forecasting (internal model on AWS/Azure) — cloud vendor lock-in; model drift without monitoring
- Customer service chatbot (OpenAI / Anthropic API) — hallucination risk; brand reputation; no SLA on non-enterprise plans
- Fraud detection (Stripe Radar / Sift) — false positives alienating customers; bias in fraud scoring
- Personalized recommendations (OpenAI API / internal) — data training on customer preferences; privacy risk
- Payment processing AI (Stripe / Adyen) — regulatory compliance delegation; data residency obligations
Each of these is a risk vector. One goes down, your product breaks. One has a bias, your customers are affected. One changes its data policy, your compliance posture shifts. 91% of food delivery app development projects burn $500K+ in their first year, and third-party API dependencies are a significant contributor to unexpected cost escalations. Startups rarely have budget for multi-vendor AI strategies, meaning a single API change can break core functionality with no fallback.
The regulatory exposure is real for startups too. If you use AI APIs to serve EU customers, you are a deployer under the EU AI Act with full compliance obligations regardless of your company size. The CSA warns that API security in the AI era requires fundamentally different approaches because every API integration is a trust boundary where data leaves your control. Startups typically have zero vendor governance infrastructure — and the ones that build it early will scale AI faster and more safely than those that treat governance as a post-Series-B problem.
Startups: map your AI API dependencies today. List every external AI service your product calls. For each one, answer: what happens if this API goes down for 24 hours? What happens if the vendor changes its pricing by 10x? What happens if the vendor is acquired and the acquirer changes the data policy? If you cannot answer these questions, you do not have a resilience plan. You have a prayer.
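Those three questions become answerable once the dependencies live in a registry rather than in someone's head. A minimal sketch, with illustrative services, criticality tiers, and fallbacks drawn from the food delivery example above:

```python
from dataclasses import dataclass

@dataclass
class AIDependency:
    name: str
    provider: str
    criticality: str        # "core" | "degraded-ok" | "optional"
    fallback: str | None    # what the product does if this API is down for 24 hours
    has_sla: bool           # is there a contractual uptime commitment?

REGISTRY = [
    AIDependency("route optimization", "Google Maps AI", "core",
                 fallback="static routing tables", has_sla=True),
    AIDependency("support chatbot", "OpenAI API", "degraded-ok",
                 fallback="route to human queue", has_sla=False),
    AIDependency("fraud detection", "Stripe Radar", "core",
                 fallback=None, has_sla=True),
]

def resilience_gaps(registry: list[AIDependency]) -> list[str]:
    """Flag core dependencies with no documented fallback or no SLA."""
    return [d.name for d in registry
            if d.criticality == "core" and (d.fallback is None or not d.has_sla)]

print(resilience_gaps(REGISTRY))  # ['fraud detection']
```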
Where to Start
Third-party AI risk connects to every dimension of governance. The reading path from here depends on your role and your most pressing exposure.
Your Third-Party AI Risk Reading Path
- B4: Third-Party AI Risk (you are here). The three layers, the Workday ruling, the governance framework, and the 10 contract clauses.
- Vendor Assessment Scorecard. 30 questions across 6 dimensions; the companion tool for operationalizing vendor AI assessment.
- Liability Ledger. Privacy Debt compounds fastest in ungoverned third-party relationships; map your compounding liability.
- A7 Readiness Framework. Vendor readiness is a core dimension of agentic AI deployment; assess whether your vendors are agent-ready.
- EU AI Act Guide. The deployer obligations that make you independently liable for vendor AI; know your regulatory surface.
The cross-references are structural, not ornamental. The Liability Ledger's Privacy Debt category captures the compounding cost of ungoverned vendor AI relationships. The A7 Readiness Framework's vendor readiness dimension determines whether your third-party ecosystem supports agentic deployment. The EU AI Act Strategic Guide maps the deployer obligations that make you independently liable for your vendors' AI. The NIST Practitioner's Guide provides the MAP function methodology for inventorying third-party AI. And the Ethical Debt Scoring Method quantifies the governance gaps that third-party AI relationships create. These are not separate conversations. They are one conversation about governing AI that operates beyond your organizational boundary.
Download: Third-Party AI Risk Vendor Governance Toolkit
Get the complete toolkit: AI Vendor Assessment Scorecard (30 questions, 6 dimensions), 10 contract clauses ready for legal review, extended AI inventory template, continuous monitoring checklist, and exit planning worksheet — ready to print or save as PDF.
The final word belongs to the numbers. 98% of organizations report unsanctioned AI use. 88% of AI vendors cap liability at the subscription fee. Only 17% commit to regulatory compliance. Shadow AI breaches cost $670K more. The gap between the AI you govern and the AI that governs your outcomes is the largest unaddressed risk in enterprise AI. The organizations that close it will be the ones whose AI investments survive contact with reality. The distinction in 2026 is not who adopts AI fastest, but who governs AI best.