Safety v competitiveness: the Paris AI Action Summit

Feb 10, 2025 | AI Governance, News

The Paris AI Action Summit, opening today, is the latest in a series of international efforts to shape artificial intelligence (AI) governance. Co-hosted by France and India, the summit will bring together world leaders, technology executives, and researchers to discuss how to balance AI safety with economic and technological competitiveness.

As AI development accelerates, governments are grappling with the challenge of regulating the technology without stifling innovation. The Paris summit aims to move beyond existential risks and focus on practical measures – such as AI’s impact on jobs, open-source development, and environmental sustainability.

A timeline of global AI summits

The Paris summit builds on two global AI governance meetings held in the past 18 months:

November 2023: The Bletchley Park AI Safety Summit (UK)

The UK government hosted the first major international gathering focused on AI safety and catastrophic risks. Discussions centred on advanced “frontier” models – cutting-edge AI systems with the potential to surpass human abilities in certain areas. The Bletchley Declaration, signed by 28 countries including the US and China, as well as the EU, committed signatories to international cooperation on AI safety.

Key outcomes included:

• Agreement on AI testing and safety evaluations: Governments acknowledged the need for independent testing of frontier AI models to assess their risks before deployment.

• Commitment to government oversight: Leading AI developers, including OpenAI, DeepMind, and Anthropic, agreed to provide governments with early access to their most powerful models for safety evaluations.

• Recognition of AI risks but no binding rules: While the declaration highlighted potential threats from AI, no concrete regulations were introduced.

May 2024: The Seoul AI Summit (South Korea)

The second summit shifted the focus from existential threats to practical governance. While safety remained a core issue, Seoul’s discussions centred on the economic and societal implications of AI.

Key outcomes included:

• Establishment of the International Network of AI Safety Institutes: The network was created to boost cooperation among national AI safety bodies, align research on safety standards, and develop shared testing frameworks.

• Discussions on AI’s impact on jobs and economic competition: Governments debated how to regulate AI without slowing innovation, with concerns over job displacement and monopolisation by major tech firms.

• Continued divide over open-source AI: Some nations, including the US and UK, supported responsible open-source AI, while others, particularly China, pushed for stricter controls.

February 2025: The Paris AI Action Summit (France)

The Paris summit is expected to build on past discussions and shift its focus to practical implementation. While safety remains a central theme, the agenda will include open AI models, sustainability, and economic competitiveness.

Balancing safety and innovation

One of the main tensions at the Paris summit will be how to regulate AI without hindering technological leadership. Some nations advocate for stricter oversight and safety rules, while others worry that excessive regulations could push AI development into unregulated regions.

Key points of debate will likely include:

• AI governance frameworks: The summit may explore voluntary guidelines or non-binding agreements rather than formal regulations, given the difficulty of enforcing international AI laws.

• Open-source vs closed AI models: Supporters of open-source AI argue that it democratises access and promotes innovation, while critics fear it could make powerful AI tools more vulnerable to misuse.

• The environmental cost of AI: Training and running large-scale AI models consumes vast amounts of energy. France, with its largely nuclear-powered electricity grid, is attempting to position itself as a leader in sustainable AI.

The US position

The United States is sending a delegation led by Vice President JD Vance to the Paris summit. Notably, the delegation will not include staff from the US AI Safety Institute, the US body established to assess and mitigate AI risks. Instead, representatives from the White House Office of Science and Technology Policy, including Lynne Parker and Sriram Krishnan, will attend. This composition suggests that the US is prioritising innovation and economic competitiveness over safety oversight in its current AI strategy.

China’s DeepSeek and its implications

The AI landscape has been significantly impacted by the emergence of DeepSeek, a startup that has developed the DeepSeek-R1 model. This AI assistant has rivalled Western counterparts like ChatGPT in popularity, becoming the most-downloaded free app on the US Apple App Store. DeepSeek’s rapid ascent underscores China’s growing capabilities in AI and may influence discussions at the Paris summit regarding global AI leadership and the need for collaborative governance frameworks.

What to expect from the summit

While no legally binding agreements are anticipated, a draft declaration intended to be signed by all parties has been circulating ahead of the summit, amid rumours that both the US and UK will decline to sign. Whatever its outcome, the Paris AI Action Summit will doubtless influence the global approach to AI development; we hope it balances the need for safety and ethical considerations with the drive for innovation and competitiveness.