The publication of the AI Index Report 2025 by the Stanford Institute for Human-Centered Artificial Intelligence (HAI) offers the most comprehensive overview yet of artificial intelligence’s sweeping impact, mapping a year that cemented AI’s place in science, policy, economics, and public life. No longer a frontier technology, AI has matured into a ubiquitous driver of innovation and disruption. From record-breaking model performance to global regulation and Nobel-winning breakthroughs, the past year signalled not just acceleration – but transformation.
The AI Index’s key findings can be distilled across twelve major themes. While the technical arc of AI continued steeply upward, the broader story was one of diffusion, diversification, and deepening societal implications.
1. Performance keeps climbing – but the frontier is narrowing
AI’s technical performance surged in 2024. On cutting-edge benchmarks like MMMU, GPQA, and SWE-bench, model scores leapt by 18.8, 48.9, and 67.3 percentage points respectively. Models also outpaced humans in time-constrained coding tasks and showed marked improvements in video generation – highlighted by OpenAI’s Sora and Google DeepMind’s Veo 2.
Yet the field’s leading edge is becoming more competitive and incremental. The skill gap between the top and 10th-ranked language models narrowed from 11.9% to just 5.4%, and the leading two are separated by a razor-thin 0.7%. As the performance ceiling looms, innovation is turning toward efficiency, robustness, and real-world usability.
2. AI is now embedded in everyday systems
Artificial intelligence is no longer confined to research labs. It’s shaping everything from transport to medicine. The U.S. FDA approved 223 AI-enabled medical devices in 2023 – up from just six in 2015. Meanwhile, autonomous transport has gone mainstream: Waymo logged over 150,000 weekly self-driving rides in U.S. cities, and Baidu’s Apollo Go expanded its robotaxi fleet across China.
In science, AI-driven discoveries accelerated. AlphaFold 3 and ESM3 pushed protein structure prediction to new heights, while wildfire prediction models like FireSat and biological agents like Aviary expanded AI’s scientific footprint.
3. Business has embraced AI – and is investing accordingly
After a brief dip, private sector investment roared back. In 2024, U.S. AI investment hit $109.1 billion – nearly 12 times that of China and 24 times the U.K. Globally, generative AI drew $33.9 billion in private capital, up 18.7% from the previous year.
Usage surged too. Some 78% of organisations reported using AI – up from 55% the year before – and 71% used generative AI in at least one business function. Though most companies are still in early adoption phases, AI is already delivering modest but measurable productivity and revenue gains, especially in marketing, supply chains, and software engineering.
4. Research output is booming, with China prolific and the US influential
AI publications nearly tripled from 2013 to 2023, reaching over 242,000 annually. AI now accounts for 42% of all computer science research output. China dominates quantity, contributing 23.2% of AI publications and 22.6% of citations. However, US institutions continue to lead in producing highly cited work, and remain the foremost originators of “notable” models.
Interestingly, 90% of these top models now come from industry, up from 60% just a year ago. Meanwhile, academia still leads in top-tier research output, reinforcing a bifurcation in AI innovation pathways.
5. The cost of using AI plummets as smaller models shine
Thanks to advances in hardware and optimisation, AI usage costs have dropped dramatically. Querying a model that performs at GPT-3.5 level (a score of 64.8 on MMLU) cost $20 per million tokens in November 2022. By October 2024, the same performance was available for just $0.07 – an astonishing 280-fold reduction.
Smaller models are catching up. In 2022, only huge systems like PaLM (540B parameters) could breach the 60% MMLU barrier. By 2024, Microsoft’s Phi-3-mini – just 3.8B parameters – matched that threshold, a 142-fold reduction in parameter count for the same score.
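Both figures follow directly from the numbers above. A minimal back-of-the-envelope sketch (the dollar and parameter values are the report’s; the variable names are purely illustrative):

```python
# Back-of-the-envelope check of the two figures quoted above.
# Dollar and parameter values are taken from the report; the "fold"
# factors are simply the ratios between them.

cost_nov_2022 = 20.00      # USD per million tokens, GPT-3.5-level output (64.8 MMLU), Nov 2022
cost_oct_2024 = 0.07       # USD per million tokens for equivalent performance, Oct 2024

palm_params = 540e9        # PaLM (2022): smallest model then clearing 60% on MMLU
phi3_mini_params = 3.8e9   # Phi-3-mini (2024): clears the same threshold

cost_reduction = cost_nov_2022 / cost_oct_2024        # ~286, i.e. "over 280-fold"
param_reduction = palm_params / phi3_mini_params      # ~142

print(f"Inference cost reduction:  ~{cost_reduction:.0f}x")
print(f"Parameter-count reduction: ~{param_reduction:.0f}x")
```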
6. Open models are closing the gap with closed alternatives
The performance divide between open- and closed-weight models shrank dramatically. In January 2024, top closed-weight models outperformed open ones by 8% on the Chatbot Arena Leaderboard. By February 2025, that margin had shrunk to just 1.7%. Open models like Mistral and Falcon are increasingly competitive, with implications for access, governance, and innovation equity.
7. Global regulation and investment have surged
Governments are not just watching artificial intelligence developments – they’re acting. US federal agencies introduced 59 AI-related regulations in 2024 – more than double 2023’s total. Globally, legislative mentions of AI rose 21.3% across 75 countries.
Massive state-backed investment followed suit. France pledged €109 billion, China launched a $47.5 billion semiconductor fund, and Saudi Arabia announced Project Transcendence, a $100 billion AI initiative. Meanwhile, international governance mechanisms gained traction, with new safety institutes and frameworks launched across the OECD, EU, UN, and African Union.
8. Responsible AI remains a work in progress
The number of reported AI incidents hit a record 233 in 2024 – up 56% from 2023. While awareness of risks is rising, uptake of responsible AI (RAI) practices remains uneven. Only a fraction of major model developers systematically evaluate for factuality, fairness, or safety. New tools such as HELM Safety, FACTS, and AIR-Bench emerged to fill that void, alongside updated transparency indices. Still, foundational problems persist: large language models continue to exhibit implicit biases, hallucinate facts, and underperform on logical reasoning tasks.
9. Public opinion is warming – unevenly
In a global survey of 26 countries, 18 showed increased optimism about AI between 2022 and 2024. Optimism is strongest in East and Southeast Asia – 83% in China, 80% in Indonesia – while much lower in the U.S. (39%), Canada (40%), and the Netherlands (36%).
Trust, however, is eroding. Fewer people believe AI companies protect personal data (down from 50% to 47%), and concern about AI bias and misinformation is growing, particularly around election deepfakes. Interestingly, despite fears, most people don’t believe AI will replace their job – but do expect it to change how they work.
10. Education is expanding, but gaps remain
Computer science education is on the rise globally. Two-thirds of countries now offer or plan to offer K–12 CS education, up from one-third in 2019. Africa and Latin America showed the fastest growth. In the US, AI-related master’s degrees nearly doubled between 2022 and 2023. However, access remains patchy – especially in regions with limited infrastructure. And while 81% of US CS teachers believe artificial intelligence should be taught in schools, fewer than half feel equipped to do so, highlighting an urgent need for investment in teacher training and curriculum development.
11. Environmental costs are mounting
As AI models grow, so does their carbon footprint. GPT-3 generated an estimated 588 tons of CO₂ during training. GPT-4 jumped to over 5,000 tons, and Llama 3.1 (405B parameters) crossed 8,900 tons. These figures vastly exceed average annual per-person emissions and raise serious questions about AI’s environmental sustainability – particularly given that compute demands double every five months. Hardware, fortunately, is evolving. Performance is doubling every 1.9 years, costs are falling by 30% annually, and energy efficiency is improving by 40% per year.
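To see why the sustainability concern persists despite hardware progress, it helps to put these growth rates on a common annual footing. A rough sketch, assuming steady compound growth at the rates quoted above (the per-year conversion is ordinary arithmetic, not a figure from the report):

```python
# Annualising the growth rates quoted above, assuming steady compound growth.

compute_demand_growth = 2 ** (12 / 5)     # compute doubles every 5 months   -> ~5.3x per year
hw_performance_growth = 2 ** (1 / 1.9)    # performance doubles every 1.9 yr -> ~1.44x per year
perf_per_dollar_growth = 1 / (1 - 0.30)   # costs fall 30% per year          -> ~1.43x per year
energy_efficiency_growth = 1.40           # efficiency improves 40% per year -> 1.40x per year

print(f"Compute demand:         ~{compute_demand_growth:.1f}x per year")
print(f"Hardware performance:   ~{hw_performance_growth:.2f}x per year")
print(f"Performance per dollar: ~{perf_per_dollar_growth:.2f}x per year")
print(f"Energy efficiency:      ~{energy_efficiency_growth:.2f}x per year")
```

On these assumptions, demand for compute grows roughly five-fold each year, while hardware performance and energy efficiency together improve by only around two-fold – which is why total energy use and emissions keep climbing even as the chips themselves get better.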
12. Science, medicine and ethics are deeply intertwined with AI’s future
AI’s integration into science and medicine accelerated dramatically. AlphaFold’s protein-structure breakthroughs earned DeepMind’s Demis Hassabis and John Jumper the Nobel Prize in Chemistry, while foundational work on artificial neural networks won John Hopfield and Geoffrey Hinton the Nobel in Physics. Medically, AI models now outperform doctors at diagnosing complex conditions, detecting cancers, and predicting mortality. Synthetic data is revolutionising drug discovery and privacy-preserving analytics. Yet ethical scrutiny is intensifying: medical AI ethics publications quadrupled between 2020 and 2024.
From potential to presence
The AI Index Report 2025 is a window into a turning point for artificial intelligence. Where previous years have charted the technology’s potential, this past year tracked its presence. Artificial intelligence is now part of the scaffolding of modern society – embedded in our systems, institutions, and everyday lives. What emerges is not a simple binary story of triumph or danger, but one of complexity. Technical progress is astounding. Adoption is real. But so too are the challenges: governance, trust, bias, emissions, and inequality.