Efforts to govern AI have so far relied heavily on voluntary principles and high-level ethics guidelines emphasising fairness, transparency, accountability, and safety. While useful for setting the tone of responsible AI development, these guidelines often lack the detail and enforceability needed to offer genuine assurance.
This gap has created uncertainty for organisations seeking to demonstrate responsible AI practices and for governments wanting to protect the public interest without stifling innovation. In response, global standards-setting bodies such as the International Organization for Standardization (ISO) have begun developing more structured, formalised approaches to AI governance, which have an important role to play in AI assurance.
AI-specific legislation
Although AI’s impact is global, legal frameworks dedicated specifically to AI are thin on the ground. The European Union’s AI Act, adopted in 2024, stands out as the world’s first comprehensive legal framework explicitly classifying AI systems by risk and imposing obligations proportionate to that risk. It sets out mandatory requirements for high-risk systems, such as transparency, human oversight, robust documentation, and quality management, and bans certain unacceptable uses altogether. South Korea is another global exception. Its 2023 AI Act aims to ensure safe development and deployment, strengthen explainability and risk mitigation requirements, and encourage the growth of the domestic AI ecosystem through clearer rules.
Beyond these two examples, most countries have no dedicated AI legislation. Instead, AI systems are governed indirectly through laws designed for other purposes, such as data protection, consumer protection, non-discrimination, and competition. For instance, the United States lacks a federal AI law, though it has produced executive orders and non-binding guidelines. China, meanwhile, regulates AI through sectoral rules and content moderation laws but has no single overarching AI statute.
This rather fragmented landscape creates uncertainty for developers, deployers, and users of AI systems. Organisations operating internationally must navigate diverse requirements, sometimes with conflicting expectations, while also responding to increasing public demands for transparency and accountability.
Defining AI assurance
In an uncertain regulatory environment, AI assurance has emerged as a way for organisations to demonstrate that their AI systems meet expectations for safety, ethics, legality, and performance. Assurance activities are diverse: they can include risk assessments, testing and validation, impact assessments (for bias, safety, and social effects), audits, and transparent documentation of development processes. (EthicAI’s BeehAIve® platform pulls these together in one dashboard interface to provide an organisation-wide view of all assurance and compliance activities.)
The goal is to provide confidence – to customers, regulators, business partners, and the public – that an AI system has been developed responsibly and will behave as intended. In mature regulated sectors such as pharmaceuticals or aviation, assurance is mandatory and highly formalised. For AI, however, assurance remains largely voluntary outside of the few jurisdictions with AI-specific laws.
This is where standards can become especially valuable. By offering agreed, detailed frameworks, they can help organisations structure their internal governance, manage risks, and signal responsibility even in the absence of legislation. They can also support regulators in enforcing requirements by providing common benchmarks that can be adopted or referenced.
The role of international standards
International standards don’t have the force of law. Rather, they offer shared expectations and practical guidance that can be adopted voluntarily or incorporated into regulations, procurement requirements, or contractual obligations. For AI, where legal requirements vary widely, global standards can harmonise best practices across borders, reducing uncertainty and making compliance more manageable for organisations operating internationally.
Until recently, there were few truly AI-specific international standards. ISO, working with the International Electrotechnical Commission (IEC), has begun to address this gap with a suite of standards focused on AI management. The first three core standards in this series – ISO/IEC 42001, ISO/IEC 42005, and ISO/IEC 42006 – were published or finalised between late 2023 and 2024. Each plays a different role in supporting responsible AI development and use.
ISO/IEC 42001:2023 – the AI management system standard
ISO/IEC 42001 is the centrepiece of the new standards family. It specifies requirements for an artificial intelligence management system (AIMS) and is modelled on well-known management system standards such as ISO 9001 (quality management) and ISO/IEC 27001 (information security management).
The standard sets out requirements for establishing, implementing, maintaining, and continually improving an AI management system within an organisation. It demands clear leadership commitment, defined roles and responsibilities for AI governance, and systematic identification and treatment of AI-related risks. Organisations must adopt policies and objectives aligned with their context and stakeholder needs and maintain operational controls for design, development, testing, monitoring, and deployment to ensure AI quality and safety.
Critically, ISO 42001 is certifiable. Organisations can undergo independent third-party audits to demonstrate compliance. This gives customers, regulators, and the public an externally verified signal that the organisation has adopted a structured approach to managing AI risks.
However, ISO 42001 is process-based and technology-agnostic. It does not prescribe specific technical measures or outcomes for fairness, safety, or ethics. Instead, it requires organisations to have robust systems in place for identifying and managing those issues appropriately. This flexibility makes it widely applicable across sectors and use cases but also means outcomes will depend heavily on how rigorously organisations implement and maintain their systems.
ISO/IEC 42005 – guidance on impact assessments
ISO/IEC 42005, finalised in 2024, provides guidance on conducting AI impact assessments (AI-IAs), a key tool for anticipating and mitigating the potential harms of AI systems.
An AI impact assessment is essentially a structured process for evaluating the positive and negative consequences of an AI system, both intended and unintended. ISO 42005 sets out a standardised methodology for carrying out such assessments: define the system and its context, engage with stakeholders, identify relevant impact categories (such as safety risks, privacy violations, discrimination, security vulnerabilities, and social and environmental effects), assess their likelihood and severity, and design mitigation measures.
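To make the steps just described concrete, here is a minimal sketch of how such an assessment might be captured as structured data. The field names, scoring scale, and example values are purely illustrative assumptions, not terminology or requirements taken from ISO 42005 itself.

```python
from dataclasses import dataclass, field

@dataclass
class Impact:
    """One identified impact of the AI system (illustrative structure only)."""
    category: str          # e.g. "privacy", "discrimination", "safety"
    description: str
    likelihood: int        # 1 (rare) to 5 (almost certain) -- assumed scale
    severity: int          # 1 (negligible) to 5 (critical) -- assumed scale
    mitigation: str        # planned measure to reduce likelihood or severity

    @property
    def risk_score(self) -> int:
        # Simple likelihood x severity product: a common heuristic,
        # not a calculation mandated by the standard.
        return self.likelihood * self.severity

@dataclass
class ImpactAssessment:
    """A lightweight record of an AI impact assessment (hypothetical schema)."""
    system_name: str
    context: str                          # intended use, deployment setting
    stakeholders: list[str] = field(default_factory=list)
    impacts: list[Impact] = field(default_factory=list)

    def highest_risks(self, top_n: int = 3) -> list[Impact]:
        # Surface the impacts that most need mitigation attention.
        return sorted(self.impacts, key=lambda i: i.risk_score, reverse=True)[:top_n]

# Example usage with invented values
assessment = ImpactAssessment(
    system_name="CV screening model",
    context="Shortlisting job applicants for a recruitment team",
    stakeholders=["applicants", "recruiters", "regulator"],
    impacts=[
        Impact("discrimination", "Lower scores for under-represented groups",
               likelihood=3, severity=4, mitigation="Bias testing before each release"),
        Impact("privacy", "CVs retained longer than necessary",
               likelihood=2, severity=3, mitigation="Automated retention limits"),
    ],
)
for impact in assessment.highest_risks():
    print(impact.category, impact.risk_score)
```

In practice such records would live in a governance platform rather than ad hoc code, but the structure mirrors the sequence of steps the standard describes.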
Rather than imposing a single approach or set of thresholds, ISO 42005 is intended to be flexible, allowing organisations to tailor their assessments to their own context while ensuring consistency and rigour. It is designed to work seamlessly as part of an ISO 42001-compliant AI management system but can also be used as stand-alone guidance by organisations not seeking full certification.
Unlike ISO 42001, ISO 42005 is not certifiable on its own. Its role is to deepen the quality of assurance processes by standardising how organisations conduct one of the most critical elements of AI risk management.
ISO/IEC 42006 – guidance on transparency
The third standard in this suite, ISO/IEC 42006, also finalised in 2024, provides guidance on achieving meaningful transparency for AI systems.
Transparency is often cited as essential for trustworthy AI, but in practice it is inconsistently defined and applied. ISO 42006 tackles this by offering structured guidance on how organisations can deliver transparency that is genuinely useful to different audiences. It defines transparency not simply as making source code public, but as providing meaningful, understandable, and appropriate information about an AI system’s design, purpose, performance, and risks.
The standard recognises that transparency requirements vary by audience. For instance, end users may need clear explanations of system behaviour and limitations, while business customers or regulators may require detailed technical documentation. ISO 42006 describes artefacts such as datasheets for datasets, model cards, system cards, user-facing disclosures, and technical documentation, encouraging organisations to think carefully about who needs what information, when, and in what form.
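As an illustration of the kind of audience-facing artefact mentioned above, the sketch below renders a minimal model card from structured fields. The fields follow common practice in published model cards and are assumptions for illustration, not a checklist defined by ISO 42006.

```python
# A minimal, illustrative model card: field names and values are invented examples.
model_card = {
    "model_name": "ExampleClassifier v1.2",
    "intended_use": "Routing customer support tickets by topic",
    "out_of_scope_uses": ["Decisions with legal or financial effect on individuals"],
    "training_data": "Anonymised historical support tickets (2019-2023)",
    "evaluation": {"accuracy": 0.91, "macro_f1": 0.87},  # placeholder figures
    "known_limitations": ["Lower accuracy on messages under 10 words"],
    "contact": "ai-governance@example.com",
}

def render_model_card(card: dict) -> str:
    """Render the card as plain text suitable for a user-facing disclosure."""
    lines = []
    for key, value in card.items():
        lines.append(f"{key.replace('_', ' ').title()}: {value}")
    return "\n".join(lines)

print(render_model_card(model_card))
```

The same underlying fields could be rendered differently for different audiences – a short disclosure for end users, fuller technical documentation for business customers or regulators.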
Like ISO 42005, ISO 42006 is guidance rather than a certifiable standard. Its goal is to help organisations meet ISO 42001’s broader requirements for documentation and communication by offering practical, audience-sensitive approaches to transparency.
Taken together, ISO 42001, 42005, and 42006 represent a major step forward in operationalising responsible AI principles. They offer a shared international framework that can transcend national borders and fragmented legal regimes.
ISO 42001’s certifiable management system requirements provide a structured foundation for organisations to show they are managing AI risks systematically. ISO 42005 and ISO 42006 enrich this framework by providing practical, standardised approaches for two of the most important assurance tools: impact assessments and transparency.
This approach offers several benefits. First, it promotes consistency: customers, regulators, and business partners can evaluate AI governance practices against a recognised benchmark. Second, it supports risk management by forcing organisations to systematise their identification and mitigation of AI-related risks. Third, it facilitates international trade and procurement, as buyers can specify ISO-conformant practices without mandating bespoke local rules.
Importantly, these standards can also support emerging regulatory regimes. The EU AI Act, for example, does not specify in detail how organisations should implement risk management or transparency requirements. By adopting ISO standards, companies can more easily demonstrate compliance with these laws. Even in jurisdictions without AI-specific legislation, ISO standards can offer a credible baseline that companies can adopt voluntarily or that governments can endorse or reference in policy.
Limits of a standards-based approach
Despite their strengths, these standards also have important limitations.
They remain voluntary unless incorporated into law, procurement requirements, or contractual obligations. In many cases, organisations under commercial pressure may choose not to adopt them at all, especially if customers or regulators are not demanding assurance.

ISO 42001 focuses on process rather than outcomes. While this flexibility is necessary for diverse contexts and technologies, it also means that a certified organisation could still deploy harmful AI systems if its internal processes are inadequate or performed in bad faith. Certification can demonstrate that an organisation has a management system in place, but it does not guarantee the ethical quality of every AI system it produces.

Audit practices may vary in rigour. ISO certification relies on independent auditors, but their quality and independence can differ between markets. Without strong governance of the certification ecosystem, there is a risk of ‘check-box’ compliance.

There is no global enforcement mechanism. Even if many countries adopt or reference ISO standards, there is no single body to police compliance or penalise organisations that fail to live up to their commitments.

Standards cannot substitute for substantive regulation. Even the best-designed voluntary standard cannot ban unacceptable practices, impose liability for harms, or set societal thresholds for risk. At best, they complement legislation by offering practical tools for achieving regulatory goals.
Can global standards deliver meaningful AI assurance?
So, can ISO 42001, 42005, and 42006 meaningfully advance AI assurance?
Our AI assurance platform BeehAIve® assesses models and systems against ISO standards as part of the assurance process, but the standards are not sufficient on their own. Only legislation can create binding rules, ensure a level playing field, and protect public interests through enforceable mandates. The EU AI Act and South Korea’s AI Act show what this kind of regulation can look like. Standards alone cannot fill that role.
But they are essential complements to regulation and vital tools even where regulation is absent. By translating abstract principles into operational processes, these standards make it feasible for organisations to implement responsible AI practices in a consistent, credible way. They also support the development of an assurance ecosystem of auditors, platforms, and internal governance teams (of which EthicAI is a part) that can evaluate, improve, and certify AI management systems.
For jurisdictions without AI-specific laws, ISO standards offer an off-the-shelf baseline that companies can adopt voluntarily, helping to address risks and build trust while avoiding the costs and confusion of entirely bespoke local frameworks. For regulators, these standards offer a practical way to define and assess compliance without having to reinvent governance systems from scratch.
Ultimately, effective AI assurance will require a combination of legal requirements, industry standards, independent audits, and public accountability. ISO’s new AI management standards offer the first coherent, internationally agreed framework for one critical piece of that puzzle. If your organisation is serious about earning and maintaining trust in its AI systems, talk to us about how BeehAIve® can pull it all together.