The risks of using AI models without responsible governance are real. Left unchecked, AI systems can make harmful decisions that erode trust and damage a business's reputation. Too often, AI governance is treated as an afterthought: a compliance checkbox or a vague policy buried in legal documentation. ISO/IEC 42001, the world's first AI management system standard, was built to close this governance gap. It requires organizations to embed structured policies, named ownership, impact assessments, human oversight, explainability, and continuous monitoring. For IT leaders and AI teams, adopting ISO/IEC 42001 as part of their AI procurement criteria helps ensure a focus on responsible, auditable, and trustworthy models.
The ISO/IEC 42001 Framework
ISO/IEC 42001 defines a structured governance model that helps AI developers manage risk throughout the full AI lifecycle. It requires them to embed policies, assign clear roles, monitor model behavior, assess impact, and continuously improve. The standard is not a one-time checklist but an evolving management system for responsible AI. For businesses purchasing AI, a vendor's informal assurances of good governance are no longer enough: poor governance leads to bias and unexplainable model behavior, and poses operational, reputational, and regulatory risk. ISO/IEC 42001 provides a proactive, auditable framework with documented evidence that an AI system is developed and operated responsibly. Some major AI providers are already ISO/IEC 42001 certified: IBM gained certification in 2025 for its IBM Granite open-source language models, and Microsoft received it the same year for its Azure AI Foundry Models and Microsoft Security Copilot.
Evaluating Non-Certified AI Developers
If an AI developer is not ISO/IEC 42001 certified, that does not automatically mean its AI is irresponsible. What matters is how closely its governance matches the substance of ISO/IEC 42001. The following seven areas are what to look for when judging equivalence.
- Leadership and governance. A strong governance framework should clearly define roles, responsibilities, and decision‑making bodies. There should be senior leadership buy‑in and a governance board or committee monitoring ethical issues. If an AI developer has committed executives, documented policies, and accountability baked into their structure, that’s a powerful signal of maturity.
- Business context and stakeholder alignment. The AI developer should explicitly assess and document its internal and external environment, like regulatory obligations and stakeholder expectations. This involves mapping interested parties (customers, regulators, users) and aligning their AI systems to that context.
- Risk management. Equivalent governance will treat risk-based thinking as central. Look for processes where the AI developer identifies, assesses, and mitigates specific AI‑related risks: bias, lack of transparency, security, privacy, and misuse. They should have documented risk assessments, mitigation plans, and regular reviews.
- Skills and infrastructure. An AI developer can show equivalence by investing in human and technical resources. This means competent staff, training programs, and infrastructure. Look for evidence of ongoing training, awareness initiatives, and enough budget and tools to operate a robust AI governance regime.
- AI lifecycle management. The AI developer should manage their AI systems throughout the lifecycle: data collection and preparation, model development and testing, deployment, and even retirement. They should also maintain documentation for each stage for transparency.
- Tracking and assessment. The AI developer should run internal audits and reviews to monitor model performance (for example, drift, bias, and accuracy). They should use KPIs, metrics, and evaluation mechanisms to assess how well their governance works in practice.
- Continual improvement. Their AI governance should not be static; the developer must actively learn and iterate, correcting deviations and updating policies or controls as risks evolve or new stakeholders emerge.
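The seven-area review above can be turned into a simple, repeatable scorecard during vendor evaluation. The sketch below is a hypothetical illustration, not part of the standard: the area names, the 0-5 rating scale, and the equal weighting are all assumptions your procurement team would tailor.

```python
# Hypothetical ISO/IEC 42001 equivalence scorecard.
# Area names, the 0-5 scale, and equal weighting are illustrative
# assumptions, not defined by the standard itself.

AREAS = [
    "leadership_and_governance",
    "business_context_and_stakeholders",
    "risk_management",
    "skills_and_infrastructure",
    "ai_lifecycle_management",
    "tracking_and_assessment",
    "continual_improvement",
]

def score_vendor(ratings: dict) -> float:
    """Average a 0-5 rating across all seven areas.

    Refuses to score a vendor with any unrated area, so gaps in
    due diligence surface instead of being silently averaged away.
    """
    missing = [a for a in AREAS if a not in ratings]
    if missing:
        raise ValueError(f"Unrated areas: {missing}")
    return sum(ratings[a] for a in AREAS) / len(AREAS)

if __name__ == "__main__":
    example = {a: 4 for a in AREAS}
    print(score_vendor(example))  # 4.0
```

A scorecard like this is most useful as a conversation aid: a low rating in one area points to the specific evidence (policies, audit reports, training records) to request from the vendor.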
Recommendations
- Include ISO/IEC 42001 certification in your vendor criteria. Require ISO/IEC 42001 certification in your vendor RFPs but remain open to vendors who can clearly demonstrate equivalent governance practices. Ask vendors to walk you through their AI governance process and provide details on the seven areas previously discussed that mirror ISO/IEC 42001.
- Do not treat the certification as absolute assurance. The certification is a good indicator, but you still need to evaluate the developers yourself. Ask developers how they handle dataset quality, monitor for bias, trace model decisions, and enforce governance. Use their responses to validate real-world practices, not just certification claims.
- Maintain your governance after deployment. Even with certified AI systems, the work does not stop after deployment. Continue to apply your standard security and governance protocols: set up ongoing monitoring, track key performance metrics (for example, fairness and drift), and ensure safeguards remain actively enforced in production.
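One concrete way to monitor for drift after deployment is the Population Stability Index (PSI), which compares the distribution of a model input or score between a baseline sample and production data. The sketch below is a minimal, dependency-free illustration; the bin count and the common rule of thumb that PSI above roughly 0.2 signals significant drift are conventions, not requirements of ISO/IEC 42001.

```python
import math

def psi(expected, actual, bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a
    production sample. Values above ~0.2 are commonly read as
    significant drift (a rule of thumb, not a standard threshold)."""
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges derived from the baseline sample.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        total = len(values)
        # Floor each fraction to avoid log(0) on empty bins.
        return [max(c / total, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

if __name__ == "__main__":
    baseline = [i / 100 for i in range(100)]
    shifted = [v + 0.3 for v in baseline]
    print(f"{psi(baseline, baseline):.3f}")  # 0.000 for identical samples
    print(f"{psi(baseline, shifted):.3f}")   # large value flags the shift
```

In practice, a check like this would run on a schedule against production traffic, with alerts wired to the same governance process that owns model retraining decisions.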
Bottom Line
Using AI drives innovation, but without robust governance frameworks it opens the door to risk and mistrust. ISO/IEC 42001 pushes AI developers to embed governance and transparency into their AI. IT leaders and AI teams can build this standard into procurement to help ensure that only responsible AI enters their businesses.
References
- IBM becomes first major open-source AI model developer to earn ISO 42001 certification, Emma Gauthier and Derek Leist, IBM, October 1, 2025
- Microsoft Azure AI Foundry Models and Microsoft Security Copilot achieve ISO/IEC 42001:2023 certification, Molly Bostic, Microsoft Azure Blog, July 17, 2025