
26 AI predictions for 2026

Dec 8, 2025 | AI Insights

This year, artificial intelligence (AI) continued its march into workplaces, homes and the public sector, with geopolitical and economic forces shaping its development and adoption. Following on from the predictions we made for this year, here are our bets on the main developments in 2026 and beyond:

1. Export controls reshape global AI race 

The divisive policy debate over whether tightened chip export controls strengthen or undermine US competitiveness creates further uncertainty. Meanwhile, Chinese investment in domestic chip capacity increases.

What this means: Compliance frameworks must account for evolving export restrictions and prepare for potential retaliatory measures.

2. Hard AI regulation arrives

The EU AI Act introduces enforceable obligations for high-risk systems from August 2026, with sector-specific regulators tightening rules elsewhere.

What this means: compliance, documentation and risk controls must be built into the design stage of any AI-enabled product.

3. AI energy consumption faces growing public and political scrutiny

The surge in data centre construction draws increasing attention to AI’s energy footprint. Real increases in household energy bills in regions facing grid constraints generate local opposition.

What this means: Organisations deploying AI must factor energy sourcing and carbon disclosure into infrastructure planning, in terms of both operational risk and brand exposure.

4. US-China competition drives further efficiency innovation

Due to US export controls, Chinese labs have developed highly efficient training approaches that achieve comparable performance at dramatically lower cost. The gap between US and Chinese models closes, and enterprise adoption of Chinese models increases.

What this means: Procurement and risk teams must evaluate Chinese alternatives on technical merit while managing security and data sovereignty risks.

5. Open source becomes a strategic asset for digital sovereignty

Recognition of the strategic value of the open source ecosystem increases in Europe and the Global South. The performance gap between open and closed models narrows, while growing adoption drives the development of open source tooling that eases deployment challenges.

What this means: Enterprises should consider building the capability to deploy open weight models, reducing vendor dependency and maintaining data sovereignty.
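
To make this concrete, here is a minimal sketch of running an open-weight model on your own infrastructure with the Hugging Face transformers library. The model identifier is a hypothetical placeholder for whichever open-weight model your evaluation settles on, and a production deployment would typically sit behind a dedicated inference server rather than a bare pipeline.

```python
# Minimal sketch: serving an open-weight model on your own infrastructure,
# so prompts and outputs never leave your environment.
# Assumes the transformers, torch and accelerate packages are installed;
# the model identifier below is a hypothetical placeholder, not a real repo.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="your-org/approved-open-weight-model",  # hypothetical identifier
    device_map="auto",  # place weights on available GPU(s), else fall back to CPU
)

result = generator(
    "Summarise our data-retention policy in one sentence.",
    max_new_tokens=100,
)
print(result[0]["generated_text"])
```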

6. Inference costs continue their exponential decline

Competition from Chinese providers intensifies the downward trend, with inference costs dropping globally.

What this means: Organisations should shift their focus from minimising inference cost to unlocking use cases that were previously uneconomic, in order to maximise value.
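
As a rough illustration of the break-even arithmetic, the sketch below compares the inference cost of a hypothetical per-document task with the value it delivers at falling token prices. Every figure is an assumption chosen for the example, not a forecast or a real price.

```python
# Back-of-the-envelope break-even check for a single AI use case.
# All figures are illustrative assumptions, not real prices or benefits.

TOKENS_PER_TASK = 6_000   # prompt + completion tokens per document (assumed)
VALUE_PER_TASK = 0.02     # assumed value of automating one document, in $

def cost_per_task(price_per_million_tokens: float) -> float:
    """Inference cost of one task at a given token price ($ per 1M tokens)."""
    return TOKENS_PER_TASK / 1_000_000 * price_per_million_tokens

# Falling price points: the same task flips from uneconomic to economic.
for price in (15.0, 5.0, 1.0, 0.25):
    cost = cost_per_task(price)
    verdict = "economic" if cost < VALUE_PER_TASK else "uneconomic"
    print(f"${price:>5.2f}/1M tokens -> ${cost:.4f} per task ({verdict})")
```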

7. AI infrastructure becomes exposed to energy market volatility

At the same time, as inference workloads scale and data centres proliferate, AI operating costs become increasingly tied to electricity prices. Energy shocks could rapidly alter the economics of AI services.

What this means: Organisations should stress-test against energy price volatility. Geographic diversification of compute becomes a strategic consideration.
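
A simple starting point for such a stress test is to model serving cost as a function of electricity price. The sketch below uses hypothetical GPU power, PUE and amortisation figures to show how a price shock flows through to cost per GPU-hour; substitute your own contract and hardware numbers.

```python
# Illustrative sensitivity of compute cost to electricity prices.
# Hardware, PUE and price figures are assumptions for the sake of the example.

GPU_POWER_KW = 0.7             # assumed draw of one accelerator under load (kW)
PUE = 1.3                      # assumed data-centre power usage effectiveness
AMORTISED_HW_PER_HOUR = 1.80   # assumed hardware/facility amortisation ($/GPU-hour)

def cost_per_gpu_hour(electricity_price_per_kwh: float) -> float:
    """Total cost of one GPU-hour at a given electricity price ($/kWh)."""
    energy_cost = GPU_POWER_KW * PUE * electricity_price_per_kwh
    return AMORTISED_HW_PER_HOUR + energy_cost

baseline = cost_per_gpu_hour(0.10)   # baseline electricity price ($/kWh)
for price in (0.10, 0.20, 0.40):     # baseline plus two shock scenarios
    cost = cost_per_gpu_hour(price)
    change = (cost / baseline - 1) * 100
    print(f"${price:.2f}/kWh -> ${cost:.3f}/GPU-hour ({change:+.1f}% vs baseline)")
```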

8. Synthetic data and data markets become increasingly prominent

Data scarcity for frontier model development, privacy and regulatory risks, and the need for edge-case coverage all drive growth in the synthetic data market. Major players will continue to snap up synthetic data specialists.

What this means: Firms investing in AI must develop or procure synthetic data capability, particularly in regulated sectors where real data is legally constrained or insufficient for model robustness.

9. AI becomes the standard layer of enterprise software

Vendors embed AI assistants, multi-agent systems and automation directly into CRMs, ERPs and productivity suites.

What this means: organisations must assume AI functionality is native and redesign processes to take advantage of it.

10. AI spending passes the US$2 trillion mark

Analysts project global AI-centric investment will top US$2 trillion next year, largely in cloud training capacity, inference hardware and data platforms.

What this means: capital planning must prioritise AI-compatible architecture or risk losing competitive ground.

11. AI safety becomes a board-level risk domain

Deepfake fraud, model poisoning and synthetic identity attacks rise sharply, pushing companies to adopt provenance systems and robust testing.

What this means: governance frameworks, AI assurance and board reporting on AI safety become as critical as traditional cyber risk.

12. Agentic AI automates multi-step processes

Organisations deploy autonomous-but-supervised AI agents to schedule tasks, handle service requests and orchestrate workflows.

What this means: monitoring, guardrails and escalation paths are required for safe deployment of chains of interacting agents.
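
As an illustration of what guardrails and escalation paths can look like in code, the sketch below wraps each agent-proposed action in a policy check and routes anything outside an allow-list, or above a risk threshold, to a human reviewer. The action names, risk scores and thresholds are hypothetical placeholders, not features of any particular agent framework.

```python
# Minimal sketch of a guardrail around an agent's proposed actions:
# low-risk, allow-listed actions run automatically; anything else escalates
# to a human. Action names, scores and thresholds are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")

ALLOWED_ACTIONS = {"read_ticket", "draft_reply", "schedule_meeting"}
RISK_THRESHOLD = 0.7  # above this, a human must approve

def execute_with_guardrail(action: str, risk_score: float, payload: dict) -> str:
    """Run an agent-proposed action only if it passes the policy checks."""
    if action not in ALLOWED_ACTIONS:
        log.warning("Blocked unlisted action %r; escalating to human review", action)
        return "escalated"
    if risk_score > RISK_THRESHOLD:
        log.warning("Action %r exceeds risk threshold; escalating", action)
        return "escalated"
    log.info("Executing %r with payload %s", action, payload)
    return "executed"

# Example: the agent proposes one routine step and one out-of-policy step.
print(execute_with_guardrail("draft_reply", 0.2, {"ticket_id": 123}))
print(execute_with_guardrail("issue_refund", 0.9, {"ticket_id": 123}))
```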

13. Embodied AI gains adoption in logistics and operations

Robots, drones and AI-assisted machines see widespread uptake in warehouses, manufacturing lines and inspection tasks.

What this means: firms need integrated digital–physical governance covering safety, maintenance and workforce transition.

14. AI-driven search replaces traditional navigation

Internal and external search tools offer AI summaries instead of long pages of links, reshaping how employees and customers find answers.

What this means: information architecture must be optimised so AI systems can surface correct, brand-safe responses.

15. Debate over ‘generalist’ AI intensifies

More capable multimodal models trigger discussion about general-purpose reasoning, though true AGI remains elusive. Scale-dependent approaches are increasingly challenged by the revival of neurosymbolic methods and lean recursive reasoning models.

What this means: scenario planning for rapid capability jumps becomes prudent, without betting strategy on any particular AGI timeline.

16. AI-driven fraud detection becomes universal

Payment networks and banks deploy behavioural, multi-modal anomaly detection that stops billions in fraud attempts.

What this means: risk teams must modernise platforms and provide explainability to regulators reviewing automated decisioning.
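
For illustration, the sketch below trains an unsupervised isolation forest on a handful of synthetic transaction features and flags outliers using scikit-learn. Real payment-network systems combine far richer behavioural and multi-modal signals; the features, data and contamination rate here are assumptions.

```python
# Toy sketch of behavioural anomaly detection on transactions using
# scikit-learn's IsolationForest. Features and data are synthetic assumptions;
# production systems use far richer behavioural and multi-modal signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic history of normal behaviour: [amount, hour_of_day, merchant_risk]
normal = np.column_stack([
    rng.normal(40, 15, 1_000),   # typical spend
    rng.normal(14, 4, 1_000),    # daytime activity
    rng.uniform(0, 0.3, 1_000),  # low-risk merchants
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new transactions: one routine, one unusual.
new_tx = np.array([
    [35.0, 13.0, 0.1],   # looks like normal behaviour
    [950.0, 3.0, 0.9],   # large, middle-of-the-night, risky merchant
])
print(model.predict(new_tx))  # 1 = normal, -1 = flagged as anomalous
```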

17. Automated financial advice reaches mass adoption

Hybrid AI-advisers guide millions of customers through investments, pensions, tax optimisation and savings plans.

What this means: firms need transparent advice frameworks that balance algorithmic recommendations with human reassurance.

18. AI decision-support becomes foundational to defence operations

Platforms integrate sensor, satellite, drone and intelligence data to support rapid situational awareness and mission planning.

What this means: defence organisations must ensure rigorous testing, robust data pipelines and resilient systems to avoid misinterpretation under pressure.

19. Autonomous systems in defence expand under strict human oversight

Uncrewed vehicles – especially underwater – gain more autonomy in navigation, detection and early threat identification but remain human-controlled for escalation or engagement.

What this means: policy, training and procurement must enforce clear boundaries that preserve lawful, ethical oversight.

20. Phones evolve into proactive personal concierges

Top devices now ship with assistants that manage calls, messages, booking tasks and on-screen actions autonomously.

What this means: customers expect the same fluid, conversational service from every digital touchpoint.

21. AI tutoring becomes a mainstream tool

Adaptive learning apps provide personalised guidance for schoolwork, languages, coding and workplace training.

What this means: employers should incorporate AI-supported learning paths into skills and development programmes.

22. Digital wellbeing assistants mature

Mental-health and wellbeing apps use AI to personalise suggestions based on tone, sleep patterns and stress markers.

What this means: organisations can extend wellbeing support, but must adopt strict privacy boundaries around sensitive data.

23. Personal finance becomes conversational

AI budgeting and cashflow assistants help users avoid overdrafts, track habits and receive real-time financial nudges.

What this means: consumers will expect financial brands to offer proactive, personalised financial guidance.

24. Shopping journeys are supported by AI retail agents

Online shoppers use conversational assistants to compare products, check compatibility and discover relevant alternatives.

What this means: retail data must be rich, structured and transparent so AI agents recommend products accurately.

25. AI-designed drug candidates progress into mid-stage trials

AI-generated molecules for diseases such as pulmonary fibrosis and certain cancer subtypes reach Phase II, validating AI-driven R&D models.

What this means: pharmaceutical pipelines need workflows built around rapid iteration, simulation and early feasibility testing.

26. Early disease detection improves further through AI

AI systems for mammography and diabetic retinopathy screening achieve strong accuracy and become viable for wider clinical rollout.

What this means: health systems integrating AI into screening pathways must ensure clinician involvement, validation and traceable decision support.

At EthicAI we’re here to help you make sense of the rapidly developing AI landscape. To talk to us about our AI assurance platform or our consultancy and advisory services, please get in touch.