
Building ethical AI: a step-by-step guide for developers

Apr 8, 2025 | AI Development

The ethical responsibility of those designing and deploying artificial intelligence (AI) has never been more important. While high-level ethical principles are useful, developers often face a critical question: what does building ethical AI actually look like in practice?

Ethics must be built in – not bolted on

AI systems are only as ethical as the choices made throughout their design, training, and deployment. Recent controversies – from discriminatory facial recognition to opaque hiring algorithms – have shown how ethical oversights can damage reputations, harm users, and result in regulatory backlash. But ethics isn’t just a risk-avoidance strategy. When embedded early and consistently, it leads to better, more inclusive, and more trustworthy systems. It improves user experience, reduces failure rates, and builds long-term confidence in emerging technologies. Here’s our step-by-step ethical AI development process:

Step 1: define the purpose – and interrogate it

Ask: why are we building this system, and who is it for? Before any code is written, establish the intended use, users, and context of the AI system. This includes:

  • Purpose clarity: What problem does the AI solve? Is AI actually the best approach?
  • Stakeholder analysis: Who benefits? Who might be harmed?
  • Impact scope: What downstream effects are likely?

Red flag: If the main goal is efficiency or automation without consideration of human impact, revisit the project’s foundation.

Example: An AI designed to prioritise patients in A&E might optimise for throughput but ignore clinical urgency, introducing harm.

Step 2: assemble a diverse, multidisciplinary team

Ethical blind spots are often a result of homogeneous teams. Including people with diverse professional backgrounds, life experiences, and social identities helps anticipate and avoid harm. Key team members should include:

  • Developers and data scientists
  • Domain experts (e.g. healthcare professionals for medical AI)
  • UX designers
  • Ethics advisors
  • People with lived experience of marginalisation

Engaging these voices early reduces ethical debt later. As the adage goes: ‘nothing about us, without us’.

Step 3: source and curate data responsibly

Biased data leads to biased systems. To ensure fairness and accuracy:

  • Conduct a data audit: where does your training data come from? Who is represented, and who is missing?
  • Use balanced datasets that reflect real-world diversity in race, gender, age, disability, etc.
  • Document data lineage: who collected it, how, and under what assumptions?

Tools like Datasheets for Datasets (Gebru et al.) and Data Nutrition Labels help provide this transparency. Avoid scraping online forums or social media without consent – even public data can carry private context.
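
To make the data audit concrete, here is a minimal sketch in Python using pandas. The column names, reference figures, and datasheet fields are illustrative assumptions, not a prescribed format; agree the reference population and fields with domain experts.

```python
import pandas as pd

# Hypothetical training data; values are illustrative only.
df = pd.DataFrame({
    "gender": ["female", "male", "male", "male", "female", "male"],
    "age_band": ["18-34", "35-54", "18-34", "55+", "35-54", "18-34"],
})

# Representation audit: compare group shares in the data against a
# reference population agreed with domain experts.
reference_shares = {"female": 0.51, "male": 0.49}
observed = df["gender"].value_counts(normalize=True)

for group, expected in reference_shares.items():
    actual = observed.get(group, 0.0)
    status = "under" if actual < expected else "over"
    print(f"{group}: {actual:.0%} of dataset vs {expected:.0%} reference ({status}-represented)")

# Record provenance alongside the data, in the spirit of Datasheets for Datasets.
datasheet = {
    "source": "anonymised service records, 2022-2024",          # illustrative placeholder
    "collected_by": "internal data team",
    "known_gaps": ["users who abandoned the form are not recorded"],
    "consent_basis": "documented in the project's data protection assessment",
}
```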

Step 4: choose transparent and interpretable models

Complex models (e.g. deep neural nets) can be powerful but opaque. Depending on the application, more interpretable models may be preferable.

  • Use explainability tools (e.g. LIME, SHAP) to understand feature influence – see the sketch at the end of this step.
  • Provide model cards that describe what a model does, its limitations, and test performance across different groups.
  • For high-stakes use cases (healthcare, law, finance), consider interpretable-by-design models.

Pro tip: Transparency isn’t only for users – it also helps your own team troubleshoot and improve systems more effectively.
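
As one illustration of the explainability tools mentioned above, the sketch below applies SHAP to a scikit-learn classifier trained on synthetic data. It assumes the shap package is installed, and it is not a substitute for choosing an interpretable-by-design model where the stakes demand it.

```python
# A minimal SHAP sketch on synthetic data; real systems should use their own
# training pipeline and review explanations with domain experts.
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(6)])

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# shap.Explainer picks a suitable algorithm for the model type (a tree
# explainer here) and uses the training data as background.
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Global view: which features drive the model's predictions overall?
shap.plots.beeswarm(shap_values)
```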

Step 5: build in fairness and audit for bias

Building ethical AI isn’t just about avoiding bias – it’s about actively promoting fairness.

  • Use fairness metrics appropriate to your context (e.g. demographic parity, equal opportunity).
  • Test outcomes across intersectional demographics (e.g. not just “gender”, but “Black women”, “older men”, etc.).
  • Set thresholds for acceptable disparities – and address them before deployment.

Regular algorithmic audits (internal or external) can identify drift or unintended discrimination over time.

Example: A credit scoring model may perform well on aggregate, but disproportionately reject applicants from marginalised postcodes.
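
Building on the credit-scoring example, here is a minimal sketch of a demographic-parity check used as a deployment gate. The data, group labels, and threshold are illustrative assumptions; the right metric and threshold should be chosen with domain and affected-community input.

```python
import pandas as pd

# Illustrative decisions: one row per applicant, with the model's decision
# and a protected attribute (here, a coarse postcode grouping).
results = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 1, 0],
    "postcode_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Demographic parity: approval rates should not differ unduly across groups.
rates = results.groupby("postcode_group")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Approval-rate gap: {gap:.2f}")

# A pre-agreed threshold turns the audit into a release gate rather than a report.
MAX_ALLOWED_GAP = 0.10
if gap > MAX_ALLOWED_GAP:
    raise SystemExit("Fairness gate failed: investigate before deployment.")
```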

Step 6: design for human oversight and contestability

Ethical AI systems do not replace human judgment – they support it.

  • Clearly identify decisions that require human-in-the-loop intervention.
  • Make outputs contestable: users should be able to challenge or appeal decisions.
  • Use confidence scores or uncertainty measures to indicate when the model is unsure – sketched in code below.
  • Provide plain-language explanations for end users.

Involve frontline staff or service users in testing oversight mechanisms. If they can’t understand or act on outputs, rethink your approach.
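
To make the confidence-score point concrete, here is a minimal sketch of routing low-confidence predictions to a human reviewer. The threshold, model interface, and return format are illustrative assumptions.

```python
import numpy as np

# Threshold agreed with domain experts and service users, not set by developers alone.
CONFIDENCE_THRESHOLD = 0.80

def decide(model, features: list) -> dict:
    """Return the model's decision, or defer to a human when it is unsure."""
    probabilities = model.predict_proba([features])[0]  # assumes a scikit-learn-style classifier
    confidence = float(np.max(probabilities))
    prediction = int(np.argmax(probabilities))

    if confidence < CONFIDENCE_THRESHOLD:
        # Defer, and log enough context for the reviewer to understand and contest it.
        return {"decision": "refer_to_human", "model_view": prediction, "confidence": confidence}
    return {"decision": prediction, "confidence": confidence}
```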

Step 7: protect user privacy and data rights

Privacy is both a legal requirement and an ethical necessity. Developers building ethical AI should:

  • Minimise data collection to only what’s necessary
  • Use techniques like differential privacy, federated learning, or synthetic data when possible – a differential-privacy sketch follows this list
  • Provide clear consent processes and explain data usage
  • Offer opt-out or data deletion mechanisms
  • Adopt Privacy by Design principles, and document data protection efforts for internal and regulatory review.
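
As a small illustration of the differential-privacy technique listed above, here is a sketch of the Laplace mechanism applied to a simple count query. The epsilon value and the query are illustrative, and production systems should use a vetted library rather than hand-rolled noise.

```python
import numpy as np

def private_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise calibrated to its sensitivity."""
    sensitivity = 1.0  # one person's record changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: report how many users opted in, without exposing any individual.
print(private_count(true_count=1234, epsilon=0.5))
```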

Step 8: test with real users – including edge cases

Testing isn’t just for technical bugs – it’s an ethical safeguard.

  • Conduct user testing with people from diverse backgrounds, including those with disabilities or low digital literacy
  • Watch for unintended consequences, usability issues, or accessibility failures
  • Stress-test for misuse: how could this system be exploited, gamed, or weaponised?
  • Real-world testing should simulate edge cases, not just typical users. Bias often hides in the tails of the distribution.
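
One way to turn these points into repeatable checks is an automated edge-case test suite. The sketch below uses pytest with a hypothetical triage_model fixture standing in for your own system; the stub logic is illustrative only.

```python
import pytest

@pytest.fixture
def triage_model():
    # Stand-in for the real system under test; replace with your model wrapper.
    class _Stub:
        def predict(self, record: dict) -> str:
            if not record.get("age") or not record.get("symptoms"):
                return "refer_to_human"  # fail safe on unknown inputs
            return "urgent" if "chest pain" in record["symptoms"] else "standard"
    return _Stub()

@pytest.mark.parametrize("age", [0, 17, 18, 89, 120])  # boundary and implausible values
def test_handles_age_extremes(triage_model, age):
    result = triage_model.predict({"age": age, "symptoms": "chest pain"})
    assert result in {"urgent", "standard", "refer_to_human"}

def test_missing_fields_fail_safe(triage_model):
    assert triage_model.predict({"age": None, "symptoms": ""}) == "refer_to_human"
```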

Step 9: document everything

Transparency builds trust – both inside and outside your organisation. Maintain documentation that includes:

  • Design rationale
  • Dataset summaries and known gaps
  • Model performance, including limitations
  • Fairness and bias audit results
  • Ethical considerations and how they were addressed

Useful tools: Model Cards for Model Reporting, Google’s AI Model Documentation Guidelines.
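
A lightweight way to keep this documentation close to the code is a machine-readable model card, loosely following the structure of Model Cards for Model Reporting. The fields and values below are illustrative placeholders, not a mandated schema.

```python
# Illustrative, machine-readable model card kept under version control
# alongside the model; every value here is a placeholder example.
model_card = {
    "model_name": "triage-priority-v2",
    "intended_use": "Decision support for triage staff; not for fully automated decisions.",
    "training_data": "Anonymised service records, 2022-2024; known gap: walk-outs not recorded.",
    "performance": {"overall_accuracy": 0.87, "accuracy_women_over_65": 0.79},
    "fairness_audit": "Demographic parity gap 0.04 across ethnicity groups (latest internal audit).",
    "limitations": ["Not validated for paediatric patients"],
    "ethical_considerations": "Reviewed by clinical safety and ethics advisors before release.",
    "contact": "ml-governance@example.org",
}
```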

Step 10: monitor, update, and iterate post-deployment

Ethical AI is never “done.” Once deployed:

  • Monitor performance and fairness metrics over time
  • Set triggers for retraining, auditing, or shutdown if harm is detected
  • Provide ongoing channels for user feedback and redress
  • Regularly review the social and ethical context – what’s acceptable today may not be tomorrow

Ethical maintenance is just as important as ethical design.
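
To put the monitoring and trigger points above into practice, the sketch below recomputes a fairness metric on recent production logs and escalates when it drifts past an agreed threshold. The log format, thresholds, and alerting hook are illustrative assumptions.

```python
import pandas as pd

BASELINE_GAP = 0.04    # gap recorded at deployment time
REVIEW_TRIGGER = 0.10  # agreed escalation threshold

def notify_governance_team(gap: float) -> None:
    # Placeholder alerting hook; wire this to your real escalation channel.
    print(f"ALERT: fairness gap {gap:.2f} exceeds threshold; review before further rollout.")

def weekly_fairness_check(log_path: str = "predictions_last_7_days.csv") -> None:
    logs = pd.read_csv(log_path)  # expected columns: decision (0/1), group
    rates = logs.groupby("group")["decision"].mean()
    gap = rates.max() - rates.min()

    print(f"Current gap: {gap:.2f} (baseline {BASELINE_GAP:.2f})")
    if gap > REVIEW_TRIGGER:
        # Escalate rather than silently retrain: a human decides what happens next.
        notify_governance_team(gap)
```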

Beyond the checklist: shifting mindset and culture

Ethical AI development isn’t just about following steps – it’s about embedding a culture of critical reflection, humility, and responsibility within teams and organisations. Some guiding principles:

  • Transparency over perfection: It’s OK to admit limitations
  • People over performance: Prioritise wellbeing and rights
  • Feedback over assumptions: Let lived experience guide you
  • Accountability over automation: Know who answers for outcomes

These values need to be reinforced not only in tech teams, but also by leadership, procurement officers, and business strategists.

Tools and frameworks to support ethical development

Practical resources covered above, such as Datasheets for Datasets, model cards, and explainability libraries like LIME and SHAP, can help translate high-level principles into concrete practices and improve collaboration between technical and non-technical stakeholders.

Developers hold real ethical power

Developers are not just technical implementers – they are designers of digital influence. The systems they build can reinforce or challenge inequality, enable inclusion or entrench exclusion. By adopting an ethical mindset and a methodical approach to AI development, developers can help create tools that are not only powerful but also just, accountable, and socially beneficial. As the field matures, future-proof AI will be ethical AI.