The rapid development of advanced AI systems has pushed the sector into a debate framed as a choice between closed and open AI models. But that framing doesn’t accurately reflect how models are built, released or used. A recent report, “Beyond the Binary”, argues that whether an AI model is ‘open’ or ‘closed’ is far less important than the context of release, safety readiness, and governance preparedness.
Open-weight AI models – those whose model weights (the numerical parameters learned during training) are publicly released and can be downloaded, adapted or fine-tuned – are becoming ever more capable, often approaching or matching the performance of closed models. Depending on the application, they can even exceed closed models, particularly when fine-tuned.
The spread of these models is reshaping the global AI landscape, driving innovation, research, and accessibility. But it can also magnify risks: once an open-weight model is released, it can be reused, repurposed, or tweaked – with little to prevent misuse. So the choice is no longer simply a question of open versus closed; it is a question of what is verifiable about each model and how each fits into development and deployment choices.
Why ‘open’ doesn’t equal ‘harmless’
Describing AI as ‘open’ can imply a level of transparency comparable to traditional open-source software, but that’s a misleading analogy. Open-weight AI typically provides access only to model parameters, not the training data, methodology, or compute processes. Although this partial openness accelerates innovation, it also sidesteps key elements of reproducibility and accountability. Because model weights encode all that the model has learned, releasing them can enable a wide range of downstream uses – including manipulative, deceptive or harmful applications. Once weights are in circulation, they can proliferate with almost no friction. There is no mechanism to ‘withdraw’ a model or to prevent unconstrained fine-tuning.
The benefits of openness therefore need to be weighed carefully against the risks of unconstrained modification and deployment. Yet the most comprehensive records of state-actor AI misuse involve closed models, not open ones. Interpreting this requires nuance: closed providers are well positioned to detect and report misuse, whereas open-weight models are not subject to the same oversight. As the US National Telecommunications and Information Administration (NTIA) acknowledges in its 2024 report, ‘precise estimates of the extent of these risks (especially the marginal risk of open foundation models over other models) are challenging to produce’.
Closed models: strengths and weaknesses
While open-weight AI gathers attention, closed models (like ChatGPT, Claude and Gemini) continue to play a dominant role in enterprise. Their strengths lie in controlled deployment, integrated safety layers, and ease of integration. The frontier labs have put considerable work into those safety layers.
But closed development also has weaknesses. Without public scrutiny, safety claims can be difficult to validate. Independent evaluation depends heavily on voluntary disclosures. Closed systems may also concentrate power within a small number of organisations, limiting global participation and slowing scientific progress. A lack of transparency can mask structural biases or obscure decisions that affect trust and accountability. From the perspective of users, deployers, and auditors, open models provide a degree of controllability that closed models cannot match.
Both open and closed approaches offer value – and carry risks. The challenge is to find a governance model that harnesses what each does well, while mitigating their shortcomings.
The critical role of quantisation in evaluating open and closed models
As the debate evolves, quantisation has emerged as a key factor in assessing the strengths, weaknesses and real-world behaviour of modern AI models. Quantisation reduces the precision of model parameters – for example from 16-bit floating-point values to 8-bit or 4-bit representations – enabling models to run efficiently on smaller devices or with lower compute costs.
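To make the mechanics concrete, the sketch below is a minimal Python/NumPy illustration of symmetric per-tensor quantisation (an illustrative assumption, not any particular vendor’s pipeline): each weight is replaced by a small signed integer plus one shared scale factor, which is where the memory savings come from – and where the approximation error enters.

```python
import numpy as np

def quantise_symmetric(weights: np.ndarray, bits: int):
    """Map float weights to signed integers sharing a single scale factor."""
    qmax = 2 ** (bits - 1) - 1                     # 127 for 8-bit, 7 for 4-bit
    scale = float(np.max(np.abs(weights))) / qmax  # one per-tensor scale
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the integer representation."""
    return q.astype(np.float32) * scale

# A toy weight tensor standing in for one layer of a real model.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(1024, 1024)).astype(np.float32)

for bits in (8, 4):
    q, scale = quantise_symmetric(w, bits)
    err = np.mean(np.abs(w - dequantise(q, scale)))
    print(f"{bits}-bit storage: mean absolute weight error = {err:.6f}")
```

Production schemes typically use per-channel or group-wise scales and more careful rounding, but the trade-off is the same: fewer bits per parameter in exchange for a controlled loss of fidelity.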
For open-weight models, quantisation makes powerful systems portable and inexpensive to deploy, increasing accessibility but also multiplying risk. A heavily quantised version of an advanced model can be run on consumer hardware, bringing high capability into environments with no monitoring or safeguards. Yet quantisation can also subtly distort behaviour. Lower-precision representations may change a model’s reliability, introduce new failure modes or weaken any safety training applied during fine-tuning. This complicates governance, because the safety assurances of the original model may not survive quantisation intact.
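That behavioural drift can be illustrated with a toy example (a hedged sketch on synthetic data, not a measurement of any real model): round-tripping the weights of a small classification head through 8-bit and 4-bit precision leaves the weights looking almost unchanged, yet flips a share of its decisions.

```python
import numpy as np

def round_trip(weights: np.ndarray, bits: int) -> np.ndarray:
    """Quantise weights to the given bit width, then dequantise them again."""
    qmax = 2 ** (bits - 1) - 1
    scale = float(np.max(np.abs(weights))) / qmax
    return np.clip(np.round(weights / scale), -qmax, qmax) * scale

rng = np.random.default_rng(1)
W = rng.normal(0.0, 0.05, size=(512, 10)).astype(np.float32)   # toy output head
x = rng.normal(0.0, 1.0, size=(2000, 512)).astype(np.float32)  # toy inputs

baseline = np.argmax(x @ W, axis=1)  # decisions at full precision
for bits in (8, 4):
    drifted = np.argmax(x @ round_trip(W, bits), axis=1)
    print(f"{bits}-bit weights: {np.mean(baseline != drifted):.2%} of decisions change")
```

In a language model the analogous effect appears as shifted token probabilities, and it is this kind of drift that can weaken safety behaviour instilled during fine-tuning.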
With closed models, the situation is arguably worse. Users often access them through third-party inference providers, APIs, or tools that may apply their own optimisations. The same quantisation level can produce different performance characteristics depending on the provider, and users of closed models have no visibility into whether, how, where or when quantisation or precision mixing has been applied to the model they are using.
Quantisation therefore widens the gap between capability and control. It enables remarkable computational efficiency, but also makes the assessment of a model’s risk profile far more complex. For organisations evaluating the release of a model – or integrating third-party systems – quantisation must be viewed as a core element of AI assurance rather than a purely technical optimisation.
The upside of open AI models: innovation, inclusion, and open science
Despite the risks, open-weight AI does carry significant promise. By lowering the barrier to entry, open models democratise access to cutting-edge AI technologies. Researchers, startups, and developers across the globe – including those in emerging economies – can experiment, build, and adapt AI tools to local needs. This broader access mirrors the value creation seen in open-source software and open science. For instance, studies have estimated that open-source software contributes significantly to national economies. The same potential applies to open-weight AI: by fostering innovation, enabling efficient collaboration and accelerating research, open AI models can contribute to both economic growth and societal progress.
The open-weight ecosystem can also foster transparency and scrutiny from external experts and communities, potentially surfacing vulnerabilities, biases, and ethical concerns earlier than closed-source development. But this holds only if openness is paired with appropriate governance – not as an end in itself, but as a means to enable responsible innovation.
Governance today – patchwork, voluntary, and inconsistent
The governance landscape for open-weight advanced AI is currently fragmented and uneven. There is no common industry standard that dictates when, how and under what conditions a powerful model should be released.
The recently finalised EU AI Act – alongside a voluntary General-Purpose AI Code of Practice – offers some guidance. The Code recommends that developers of all general-purpose models, open or closed, adopt technical safety measures before release. Notably, several developers of open-weight models within the EU – including the French company Mistral – have committed to adopting the Code.
Elsewhere, regulatory approaches diverge sharply. In the United States, for instance, the voluntary NIST AI Risk Management Framework provides general guidance on AI risks but lacks enforceable mandates specific to open-weight AI. In some parts of Asia, existing regulations tend to focus on content moderation, intellectual property, or user rights – not the unique dangers posed by highly capable open models. Even among AI developers, approaches vary widely: some companies have invested in internal safety protocols, while others publish little or no documentation at all. The result is a patchwork governance ecosystem that leaves numerous pathways for misuse.
Risk-anchored, tiered openness
The core recommendation of “Beyond the Binary” is to abandon the ‘open vs closed’ mindset and instead adopt a tiered, safety-anchored approach. Under this approach:
- Openness should not be assumed simply because the model aligns with open-source ideals.
- Release decisions should be governed by demonstrated safety, contextual risk assessments and readiness to mitigate potential misuse.
- Decision-making should involve a broad coalition – developers, investors, regulators, standard-setters and public institutions – all aligned around shared thresholds of safety, transparency and governance.
In practice, this means that before releasing model weights publicly, developers (or their backers) should conduct rigorous risk assessments, threat-modelling, and safety evaluations. They should document intended and unintended uses, outline mitigations, and – if deploying in sensitive contexts – consider limiting release, access tiers, or usage controls. At the institutional level, there’s a need for common standards, trusted evaluation frameworks, and transparency requirements. Voluntary commitments, while useful, are not enough. Without mechanisms for accountability, audit and enforcement, even the most well-intentioned releases may pose systemic risks.



