
Opt-out data use schemes: ethical implications

Apr 16, 2025 | AI Ethics

As governments and companies increasingly rely on data to power artificial intelligence (AI), shape public policy and deliver services, how that data is collected and used takes on enormous ethical significance. A key area is the use of opt-out data use schemes, in which individuals’ data is collected and processed by default unless they explicitly choose not to participate.

Opt-out models are often positioned as pragmatic tools to enable data-driven innovation, especially in sectors like healthcare, research and digital services. However, they also raise concerns around consent, transparency and trust – issues that strike at the heart of ethical data governance.

How are opt-out data use schemes being deployed in the EU, UK and US? What ethical tensions do they reveal? What are the benefits? What are the harms? And what should responsible AI practitioners take away from the current landscape?

Understanding opt-out schemes

In an opt-out model, individuals are presumed to agree to data collection or participation in a particular initiative unless they take action to decline. This is the inverse of an opt-in model, which requires explicit consent before any data is collected or used.
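The distinction ultimately comes down to which default applies when an individual takes no action at all. A minimal sketch, assuming a hypothetical `ConsentRecord` type and `scheme` flag (not any real system's API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentRecord:
    # None means the individual has never expressed a choice either way.
    explicit_choice: Optional[bool] = None

def is_participating(record: ConsentRecord, scheme: str) -> bool:
    """Resolve participation under an 'opt-in' or 'opt-out' scheme."""
    if record.explicit_choice is not None:
        # An explicit choice always wins, whichever scheme applies.
        return record.explicit_choice
    # Only the silent majority is affected by the choice of default:
    # opt-out presumes participation; opt-in presumes none.
    return scheme == "opt-out"
```

Under an opt-out scheme, everyone who never acts is included by default, which is precisely why the ethics of the default matter so much.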

Opt-out schemes are often used where data is considered to serve a public good – such as improving healthcare systems or fuelling scientific research. Proponents argue that they enable large-scale data gathering quickly and efficiently. But critics warn that such schemes risk undermining personal autonomy and informed consent.

The EU: protecting consent under GDPR

The European Union has positioned itself as a global leader in data protection through the General Data Protection Regulation (GDPR), which has applied since May 2018. The GDPR enshrines the principle of informed, freely given, specific and unambiguous consent, which has traditionally been interpreted as requiring an opt-in approach.

Under Article 4(11) of the GDPR, consent must be an “unambiguous indication of the data subject’s wishes.” The European Data Protection Board (EDPB) has repeatedly emphasised that pre-ticked boxes or default consent do not constitute valid consent under the regulation.

Despite this strong position, there are exceptions. Under Article 6, data may be processed without consent if it is necessary for a task carried out in the public interest or for legitimate interests, provided those interests are not overridden by the rights and freedoms of the individuals concerned. This has created space for quasi-opt-out mechanisms in areas like national statistics, epidemiological research, or public service delivery.

In practice, several EU countries have introduced systems where individuals’ health or personal data is shared unless they opt out, albeit with various safeguards. For example, France’s Health Data Hub centralises pseudonymised health data for research purposes, and individuals may opt out under certain conditions. The tension between facilitating research and protecting rights remains a live issue, particularly as governments pursue AI-driven public services.

The UK: contested data practices post-GDPR

Post-Brexit, the UK retained the GDPR in the form of the UK GDPR. However, it has signalled intentions to diverge from the EU’s data regime to promote innovation. The Department for Science, Innovation and Technology’s Data Protection and Digital Information Bill fell when Parliament was dissolved in 2024, but many of its provisions, which critics argued could water down consent requirements, have been carried into the successor Data (Use and Access) Bill.

One of the most prominent opt-out schemes in the UK is the NHS’s General Practice Data for Planning and Research (GPDPR) initiative, which seeks to extract patient data from GP surgeries to inform healthcare planning and research. Launched in 2021, the programme faced intense backlash over the lack of patient awareness and consent mechanisms. Many patients and clinicians were unaware of the scheme or confused about how to opt out.

Although the government delayed the rollout and promised improvements, the case underscored the mistrust that can arise from perceived opacity and paternalism. Ethical critics argued that a health data initiative with such wide-reaching implications must prioritise meaningful consent and community engagement – not just technical compliance.

The UK’s inclination to support opt-out approaches in the name of public interest innovation reflects a broader policy trend, but it should be tempered with careful attention to public trust, especially when dealing with sensitive personal data.

The US: fragmented privacy landscape and commercial data flows

The United States presents a different context altogether. Without a comprehensive federal data protection law, the US relies on a patchwork of sector-specific regulations – such as HIPAA for health data and the Children’s Online Privacy Protection Act (COPPA) for children’s data. This creates significant variability in how opt-out schemes operate.

In commercial contexts, US data practices frequently default to opt-out models. Most data brokers and online platforms (such as Meta and X) assume user participation in data collection unless individuals take steps to refuse – steps that are often obscure, difficult or time-consuming.

Recently, several states have passed their own privacy laws, with varying approaches to opt-out rights. The California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), provides residents with the right to opt out of the sale or sharing of their personal data. Colorado, Virginia and Connecticut have followed suit. Still, these laws rely on individuals taking action and often require them to navigate complex systems to exercise their rights.

In response, initiatives like the Global Privacy Control signal are emerging to simplify opt-out requests. But critics argue that placing the burden on users to understand and enforce their privacy rights is an ethical failure – especially when faced with powerful and opaque data ecosystems.
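Concretely, a GPC-aware browser attaches a `Sec-GPC: 1` request header (and exposes `navigator.globalPrivacyControl` to scripts), so the server-side check is simple. A sketch in Python, where the function name and plain headers dictionary are illustrative assumptions:

```python
def gpc_opt_out_requested(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control signal.

    Per the GPC proposal, a user agent expressing the signal sends the
    header `Sec-GPC: 1`; any other value, or no header at all, means
    no opt-out preference was communicated.
    """
    return headers.get("Sec-GPC", "").strip() == "1"
```

California regulators, for instance, treat such a signal as a valid opt-out-of-sale request, which is what makes a universal control attractive: one browser setting stands in for dozens of per-site forms.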

Meanwhile, in government research and health contexts, opt-out models are also present. The All of Us research programme by the National Institutes of Health seeks to gather data from over a million people to advance precision medicine, and while it is officially opt-in, many critics worry about secondary uses of data and the clarity of consent mechanisms.

Benefits of opt-out schemes

Advocates of opt-out models often point to several pragmatic and social benefits:

Greater data inclusivity: Opt-out schemes can generate larger and more representative datasets, reducing biases that arise when only certain groups opt in. This is especially valuable in public health, where underrepresentation can lead to skewed findings and inequitable policies.

Efficiency and scale: Opt-out schemes are more efficient for data collection, particularly when the use case has a clear public interest justification. This can accelerate research and innovation.

Enabling public good: In areas such as medical research, transport planning or epidemiology, using population-level data can lead to breakthroughs that benefit society at large. Proponents argue that ethical use, rather than rigid consent, should be the focus.

Default bias as a design tool: Behavioural science shows that defaults influence decisions. Using opt-out as a default may nudge people into participating in socially beneficial schemes they might support but not actively join.

Concerns over opt-out schemes

Despite these benefits, opt-out data schemes are deeply contested for several reasons:

Lack of meaningful consent: Critics argue that default participation undermines the principle of informed consent. If individuals don’t understand what’s happening or how to opt out, the ethical legitimacy of the scheme is compromised.

Trust erosion: Where people feel misled or excluded from decision-making about their data, public trust can be severely damaged. The NHS’s GPDPR is a case in point – good intentions alone are insufficient without robust public engagement.

Power imbalances: Opt-out schemes often reinforce existing asymmetries between data subjects and data controllers. The ability to opt out may be legally available, but practically inaccessible to many.

Vulnerability and exclusion: Marginalised or less digitally literate populations are less likely to understand or act on opt-out rights, leading to further exploitation or exclusion.

Mission creep: Once data is collected, it can be tempting for institutions to repurpose it for uses beyond the original intent. Without strong governance, this can lead to ethical overreach and regulatory breaches.

The expert view?

The academic literature reflects a wide spectrum of views. Scholars such as Barbara Prainsack advocate “data solidarity” models, which argue that individuals have a social responsibility to share data for collective benefit, provided there are robust safeguards and accountability mechanisms.

Others, like Shoshana Zuboff, see opt-out data regimes – particularly in commercial contexts – as emblematic of “surveillance capitalism,” where consent is a myth and personal data is commodified without genuine user control.

Legal scholars such as Professor Lilian Edwards, a fellow of the Alan Turing Institute, have called for a rethinking of consent altogether in the AI era, suggesting that ethical governance should focus more on purpose limitation, accountability and fairness, rather than individual choice alone.

Public interest organisations tend to support participatory data governance models, in which communities – not just individuals – have a say in how data is used. This collective approach challenges the very foundation of opt-in/opt-out binaries.

Ethical considerations for responsible AI and data governance

For organisations involved in ethical AI, opt-out schemes represent a crucial test of integrity. Merely following legal requirements is no longer enough; ethical leadership demands a higher standard.

Responsible AI practitioners should consider the following:

Transparency: Can users clearly understand how their data is used and how to opt out? Are efforts being made to reach all demographic groups?

Accessibility: Is opting out truly feasible for everyone, regardless of literacy, language or digital access?

Purpose alignment: Are data uses strictly limited to what is necessary and proportionate to the stated public benefit?

Governance and oversight: Are there mechanisms to audit, challenge and amend data use practices?

Community engagement: Are affected communities involved in shaping how their data is used, and on what terms?

Beyond the binary

The debate over opt-out data schemes cannot be resolved through a simple yes-or-no answer. The question is not whether opt-out models are ethical per se, but under what conditions they could be made ethically legitimate.

This demands careful balancing of individual rights and collective goods, robust data governance, and a commitment to building public trust. For organisations committed to ethical AI, this means looking beyond compliance and investing in genuine user empowerment and inclusive data stewardship.

The future of data governance may not lie in opt-in or opt-out models alone – but in more participatory, transparent and accountable systems that reflect the complexity of a data-driven world.