The law locks up the man or woman
Who steals the goose from off the common
But leaves the greater villain loose
Who steals the common from off the goose.
18th C, ANON
Increasingly frequent warnings from the financial press of another AI winter, or a market crash, are misguided. In recent months the bubble discourse has intensified, with current investment and valuation patterns increasingly compared to the financial dynamics of the dotcom bubble and subsequent crash. Apollo’s Torsten Slok argues that the AI bubble now exceeds the internet bubble, with the top 10 S&P 500 companies more overvalued than in the 1990s. An MIT finding that 95% of AI pilot projects fail to yield meaningful returns, despite over $40 billion in generative AI investment, sent tech stocks tumbling. Yet despite these warnings, Big Tech raised its 2025 AI spending from $325 billion to $364 billion.
The reason today’s boom won’t follow traditional bust patterns is not that today’s AI is more technically advanced than in earlier eras, but that the economic and political conditions sustaining it have changed. Traditional AI winters were triggered by capital flight, when governments, corporations, or investors withdrew funding. That mechanism has weakened.
The current AI boom is less a free market cycle than a form of digital enclosure. Just as the enclosure of common land transformed medieval agriculture by turning shared resources into landlord rents, platform companies have converted digital activity into vast recurring revenue streams. These ‘digital rents’ fund enormous internal surpluses that insulate AI expansion from market discipline.
Two further dynamics reinforce this insulation. First, the main actors are investing in AI not for productivity gains but to secure control of potential future tollbooths through which economic activity could flow. Second, AI infrastructure is now embedded in national security and strategic planning.
This enclosure dynamic does not guarantee endless growth – but it does alter the plausible outcomes. A sudden winter driven by external capital withdrawal is far less likely than structural, political, or environmental corrections.
This post is the first of three examining how digital enclosures have shaped AI development:
- This is no AI bubble – here’s why.
- Why do we have the type of AI we have?
- What happens if productivity doesn’t materialise?
Bubbles and winters
Marginal image from the Luttrell Psalter – England [East Anglia]; circa 1325-1335
AI winters happened when external capital was withdrawn. This mechanism – the sudden retreat of funding from disappointed sponsors – has driven every major collapse in AI development (Hendler, 2008). There have been two generally recognised AI winters, with several smaller downturns along the way. The first lasted from 1974-1980, and the second from 1987 through the 1990s (Russell and Norvig, 2021).
The term ‘artificial intelligence’ was coined in the proposal for the 1956 Dartmouth Summer Research Project. The proposal contained the following portentous extract:
‘An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer’ (McCarthy et al., 1955).
Early successes in this period sustained these grand expectations: Newell and Simon’s (1956) Logic Theorist proved theorems from Principia Mathematica; McCarthy invented LISP (1958), which became AI’s canonical programming language for decades; Rosenblatt’s perceptron (1958) is foundational to modern neural networks; and Weizenbaum’s ELIZA (1966) convinced many users they were conversing with a human.
The first winter arrived in the mid-1970s, when sponsors concluded that achievements were not meeting expectations. The 1966 ALPAC report found that machine translation was still slower, less accurate and more expensive than human translators. DARPA concluded that speech recognition failed to meet military needs and cut funding (Crevier, 1993). In Britain, the 1973 Lighthill Report concluded that AI was incapable of solving the ‘combinatorial explosion’ that made symbolic systems collapse outside of narrow ‘toy problems’. Government funding collapsed on both sides of the pond, and the first winter began in 1974.
But at no point have development and research fully ceased. Through the 1980s, a commercial boom in ‘expert systems’ led companies like DuPont, GM, and Boeing to invest heavily, while specialist firms like Symbolics and Lisp Machines Inc. sold specialised hardware for hundreds of thousands of dollars per unit.
The new funding model – corporate rather than government sponsors – also proved vulnerable to withdrawal. The rules-based expert systems proved brittle and expensive to maintain. Apple and IBM undercut specialised hardware with more powerful general-purpose alternatives at a fraction of the cost. A sudden collapse in the market for specialised AI hardware in 1987 led to a wave of bankruptcies, while DARPA’s Strategic Computing Initiative faltered, ushering in the second AI winter.
A quieter period stretched through the 1990s. Neural networks were kept alive by Hinton, LeCun, and Bengio – backpropagation had been rediscovered in 1986, but compute and data were still insufficient. Other machine learning techniques emerged – SVMs, random forests, gradient boosting – which have nowadays been recast from ‘statistics’ to ‘AI’. Venture capital remained reticent, given long development horizons and uncertain returns.
A decisive thaw came in 2012 with AlexNet, developed by a Toronto-based research team (Krizhevsky et al., 2012). Leveraging GPUs to train a deep convolutional neural network, AlexNet crushed the ImageNet benchmark, beating the second-place entry’s error rate by more than 10 percentage points. AlexNet was a ‘key moment in AI history’ (Whittaker, 2021), demonstrating that supervised machine learning could become extraordinarily effective when combined with sufficient compute and massive labelled datasets. The AlexNet paper has over 140,000 citations. As it happened, the resources this required – vast data corpora, massive compute, and existing user bases for data collection – were precisely what platform companies already possessed, and could further expand. Thus began our era of deep learning.
All previous AI winters followed the same mechanism: AI development depended on external capital that could be withdrawn, whether from government sponsors, corporate customers, or venture capitalists. Winters were not inevitable properties of the technologies, but of the funding and institutional structures supporting them.
Enclosure
External capital discipline no longer applies in the same way, due to the enclosure dynamics that have created our contemporary AI cycle. To understand why, we need to examine how platform companies generate and deploy capital, best understood through the lens of digital ‘enclosure’ (Srnicek, 2016; Zuboff, 2019).
Borrowing from Varoufakis’ concept of technofeudalism (2023), digital enclosure is the process by which platforms convert dispersed user activity into recurring, collateralisable revenues (advertising rents, marketplace fees, subscription services, cloud margins). These recurring streams create internal surpluses that can underwrite long-term capital expenditure.
From the 12th to the 19th centuries, English agrarian society was transformed by the ‘enclosure’ of common land. Fields once governed by customary rights – where peasants grazed animals or cultivated subsistence crops – were progressively fenced off, legally redefined as private property, and repurposed for profit (Neeson, 1993). The parliamentary Inclosure Acts of the 1760s–1820s accelerated this process, enclosing more than a fifth of England’s land (Allen, 1992).
Not only was more land privatised; it was also consolidated among fewer landlords. In 1786 there were 250,000 independent landowners; within only thirty years, that number had fallen to 32,000.
The central transformations were:
- The legal redefinition of rights
- The conversion of commons into new streams of rent
- The creation of new social and economic relations of dependence
Enclosures allowed landlords to engage in more profitable agricultural methods, and new streams of rental income allowed landlords to justify and subsidise upfront capital investments (McCloskey, 1972; Turner, 1986).
Recent scholarship (Boyle, 2003) suggests the effect of enclosure was primarily redistributive rather than productive. As economic historian Robert Allen demonstrates, enclosure “produced major distributional consequences, but little observable efficiency gain” (Allen, 1982, p.939). The movement was justified with efficiency arguments, while its main effect was to redistribute income to already rich landowners. Similar dynamics may be at work in digital enclosure – the value extracted through platform rents may exceed the value created.
Digital platforms create new proprietary spaces rather than privatising existing commons. Google didn’t privatise search, rather created a search monopoly. These platforms offer genuine utility; Google organises information, Amazon simplifies commerce, and Apple provides seductive computing experiences – while they also extract value from every interaction within their domains (Srnicek, 2016).
The rental mechanisms vary. Apple and Amazon take direct fees of 30% on app stores and marketplace commissions. Google and Meta harvest user data to sell targeted advertising, and Microsoft bundles productivity software into subscriptions. All convert user activity into recurring revenue streams, while more literal rental incomes come from cloud services. Users generate value simply by existing within these systems, producing data, content, and network effects that platforms monetise (Zuboff, 2019).
Unlike historical enclosure, where peasants were forced off the land through legislation and direct force, users join digital platforms voluntarily, and can voluntarily leave. But different coercive forces are at work:
- Network effects (the cost of not being on the platform increases as others join)
- Institutional integration (work requires LinkedIn, owned by Microsoft)
- Attention engineering (exit becomes psychologically difficult)
- Technological lock-in (data, relationships, and workflows become trapped) (Helmond, 2015; Srnicek, 2016)
This economic model explains why platforms can sustain AI development despite poor returns. Platforms convert dispersed user activity into recurring, collateralisable revenue. The result is not simply market concentration, but the conversion of digital space into streams of rent that can be ploughed back into AI development, regardless of whether said development generates returns.
Monopolies
Standard monopoly analysis would explain Big Tech’s AI investments as follows: market concentration creates pricing power, which yields excess profits, which then fund speculative ventures. This account omits a crucial feature of how platforms generate and deploy capital.
A monopolist extracts value by raising prices on products above competitive levels. Platforms extract value differently, not through product pricing but through rent on digital territory. Google doesn’t charge users for search, but collects rent from advertisers for access to attention. Amazon’s marketplace fees are essentially tolls for accessing customer relationships. Apple’s 30% App Store cut is rent for existing in their ecosystem. This of course still requires a monopoly – but the method of extraction differs.
This rent-seeking dynamic explains why poor AI productivity doesn’t deter investment, unlike in previous AI winters. The now classic Boston Consulting Group (BCG) study found that 95% of firms globally fail to achieve AI value at scale: 60% report minimal revenue gains despite significant investment, and 35% sit somewhere in between (BCG, 2025). This might seem like the typical teething problems of an emerging technology, as seen with railroads and electrification (David, 1990). But if platforms are competing to enclose new digital territories rather than to sell productive software, then immediate productivity gains become secondary – what matters more is securing the position at tomorrow’s tollbooth (Varoufakis, 2023).
This changes competitive dynamics. In a traditional market, firms compete to offer better products at lower prices. In an enclosure race, firms compete to control territories where they can extract rent regardless of productivity improvements. Microsoft’s multi-billion dollar OpenAI investment makes little sense as a bet on productivity software. It makes sense as an attempt to own the infrastructure through which future economic activity must pass.
Historical enclosure offers a precedent: landlords invested in fences, drainage, and legal battles not because these immediately improved yields, but because they secured permanent rental streams. Similarly, platforms pour billions into AI not for immediate returns but to establish the boundaries of future digital enclosures. The question isn’t whether AI improves productivity but whether it can be embedded deeply enough into economic life to generate rents [more on this in article 3].
Insulation
Three interlocking dynamics insulate current AI development from the winter mechanism that collapsed previous cycles.
Internal surplus
Unlike previous AI booms funded by external capital, platforms finance AI from internal surpluses generated through digital enclosure. The scale dwarfs anything in technology history.
- Alphabet/Google held roughly $95 billion in liquid assets as of Q2 2025, with quarterly revenues of $96.4 billion and Google Cloud growing 32% year-on-year. It raised capital expenditure forecasts to $85 billion for AI and cloud infrastructure (Alphabet IR, 2025).
- Apple maintained $133 billion in cash and securities in Q3 2025, with $27.4 billion in quarterly services revenue (a 13% YoY rise) and a new $100 billion buyback program (Apple, 2025).
- Amazon reported $121.1 billion in operating cash flow (TTM, Q2 2025), with AWS revenues of $30.9 billion (+17.5% YoY). Even with declining free cash flow due to capital intensity, AWS remains its profit centre, underwriting continuous reinvestment (Amazon, 2025).
- Meta ended Q2 2025 with $47.1 billion in cash and posted $47.5 billion in quarterly revenue (+22% YoY), with a raised capex of $66–72 billion for AI infrastructure. Its 43% operating margin demonstrates the capacity to sustain long-term negative AI cashflows (Meta, 2025).
Together, these firms control over $300 billion annually in AI-related capital expenditure, more than the entire venture capital industry deploys globally, and far surpassing the U.S. federal R&D budget. Meta’s 43% operating margins produce such massive cashflows that it can commit $66-72 billion to AI infrastructure despite uncertain returns. Amazon demonstrates the model’s patience: AWS operated at a loss for a decade, cross-subsidised by retail revenues until market dominance was achieved. These are not speculative bets requiring external validation – they are funded internally from renewable rent revenue.
Privately held firms at the AI model layer, such as OpenAI and Anthropic, have also secured investor commitments at a scale unprecedented in the history of computing. OpenAI’s late-2024 funding round valued the company at approximately $157 billion, later marked up toward $300 billion in private-market transactions reported in early 2025 (Financial Times, 2025).
At the infrastructure level, Nvidia’s $100 billion commitment to supply compute to OpenAI in 2025 reflects a willingness to channel national-scale capital expenditure into AI data centres with uncertain profitability (TechSpot, 2025). These trajectories involve multi-billion dollar annual losses that are nevertheless accepted by investors and strategic partners as tolerable, given the expectation of market consolidation and infrastructural indispensability.
The macroeconomic implications are staggering. Deutsche Bank calculates that AI-related capital expenditure accounted for 92% of US GDP growth in the first half of 2025 (Deutsche Bank, 2025), and that without it the US would be close to recession.
Previous AI winters were caused by fragile external funding receding. Today, this mechanism no longer operates – exogenous capital discipline does not apply in the same direct way, as the developers and sponsors are the same entities. Western AI expansion is now underwritten by enormous cash reserves, recurring rental incomes, and platform lock-in controlled by a handful of private firms. This structural change, not any technical breakthrough, explains why a 1970s-style AI winter is far less likely to occur in today’s landscape.
This structural shift stands out against previous crashes. The telecoms boom required massive debt financing – when credit dried up, investment ceased immediately. The housing bubble depended on leveraged securities that forced liquidation when values fell. But as Perkins (2025) notes in the Financial Times, current AI investment is largely unleveraged – funded from free cash flows rather than debt. Even if AI generates poor returns, there is no forced deleveraging, no cascade of margin calls, no fire sales. Companies spending their own surpluses can simply continue spending.
Defensive infrastructure
The second insulation mechanism is defensive competition over future tollbooths. Microsoft frames its OpenAI investment not as a productivity bet, but as essential to ‘winning the AI race’. This is defensive positioning to control infrastructure through which future economic activity must pass, not the offensive logic of capturing market share through better products.
The BCG (2025) finding that most firms see little value from current AI doesn’t deter investment because platforms aren’t competing to sell productive software, but are racing to own tomorrow’s tollbooths – the APIs, platforms, and infrastructure layers that will extract rent from whatever AI becomes [see article 3]. Historical enclosure offers the precedent, as landlords invested in fences and legal battles not for immediate yield improvements but to secure permanent rental positions (Varoufakis, 2023).
This defensive logic creates its own momentum. Google must match Microsoft’s investment regardless of returns because the risk of being excluded from future infrastructure outweighs the cost of current losses. Meta must build its own models rather than depend on potential rivals. The result is an arms race where withdrawal means not just missing profits but potentially being locked out of the next generation of digital infrastructure.
Geopolitics
The third insulation layer comes from AI’s infiltration into strategic priorities and infrastructures. AI, and the infrastructure which underpins it, is now securitised – treated as critical to national security and essential infrastructure. This spans defence systems, healthcare networks vulnerable to cyberattack, and economic sovereignty concerns. AI is now folded into U.S.–China competition, integrated into defence and healthcare infrastructure, and treated as a strategic asset by state actors. Palantir’s cumulative multi-billion dollar contracts with NATO-aligned militaries and the NHS, the Alan Turing Institute’s new exclusive focus on national security, and Nvidia’s active promotion of ‘sovereign AI’ projects in Europe all exemplify this dynamic. Once framed as strategic infrastructure, retrenchment – whether by private firms or governments – becomes politically costly (but not impossible). States could become, in effect, backstops to private spending.
Palantir has effectively repositioned itself as the operating system for Western militaries and security agencies. In 2024/2025 it signed multi-billion dollar contracts with the US Department of Defense, UK Ministry of Defence, and NATO partners (Palantir IR, 2025). Unlike the brittle expert system vendors of the 1980s, Palantir’s AI-driven logistics, intelligence, and battlefield modelling tools are now embedded in national defence infrastructures. Once war-fighting, procurement, and intelligence planning depend on Palantir, governments must guarantee its continuity, regardless of profitability.
20th century state investments in computing, such as Japan’s Fifth Generation Computer Systems (FGCS) project (1982-1992), France’s Plan Calcul (1966-1976), or the Soviet computing industry, were offensive investments, attempting to build new capabilities from scratch. When these projects failed to deliver competitive advantage and costs spiralled, states abandoned them, as nothing critical depended on these systems yet. Big Tech have already built the infrastructure, and now seek to make states embed AI infrastructure into defence, healthcare, and intelligence systems. This is defensive lock-in, not offensive development, which makes retreat more operationally and politically costly than abandoning R&D projects.
The EU AI Act, heavily lobbied over by Big Tech firms, effectively exempts defence applications from regulatory scrutiny. Meanwhile, stricter civilian regulation could put European AI startups and research labs at a competitive disadvantage against American providers who already have the scale to navigate regulatory complexity. Regulatory sovereignty could therefore risk deepening technological dependence.
China alone appears to recognise the nature of this enclosure. Under the “New Generation AI Development Plan,” Chinese firms such as Baidu, Alibaba, Tencent, and Huawei receive direct subsidies, cheap credit, and privileged access to national data resources. These projects are evaluated not against short-term commercial returns but against geopolitical parity with the U.S. China is the only actor to have substantially disentangled its AI development from Western platforms and infrastructure, building domestic chip fabrication, software stacks, and cloud infrastructure (Chorzempa, 2022).
In contrast, the UAE (via G42 and the Technology Innovation Institute) and Saudi Arabia (via PIF and Aramco Digital) are converting hydrocarbon rents into AI infrastructure, framing this as economic diversification (Financial Times, 2025a). But these sovereign funds have invested directly in OpenAI, Anthropic, and Nvidia supply chains, while simultaneously building domestic compute clusters, funding AI-related infrastructure as a strategic asset.
Through the National AI Mission and partnerships with Reliance Jio and Infosys, the Indian government has launched state-backed AI infrastructure projects, committing ₹10,000 crore to build sovereign AI computing infrastructure. This anchors growth in public-private partnerships with guaranteed procurement and subsidised costing, where state backing substitutes for market discipline.
Emerging as the indispensable GPU provider, the shovel-seller in a gold rush, Nvidia is repositioning itself as an architect of national policies. Nvidia is urging nations to build ‘sovereign AI’, domestic AI capabilities that, funnily enough, depend entirely on Nvidia’s hardware, U.S. licensing, and Western cloud infrastructure.
Sovereign AI projects across Europe and the Gulf subsidise their continued dependence on American cloud infrastructure, Nvidia chips, and foundation models, while framing this as autonomy and strategic advantage. Yet as Chorzempa (2022) observes, actual sovereignty would require independent chip manufacture, data infrastructure and model development of the kind only China pursues – costly multi-decade investments through SMIC, Huawei’s chip design, and mandated use of domestic cloud services.
The analytical point is clear: a new stream of AI investment has emerged that has become largely decoupled from market discipline. Platforms in the U.S. cross-subsidise with digital rents; European governments entrench AI through infrastructure dependencies; China deploys state capital to preserve technological sovereignty; Gulf states recycle petroleum wealth into compute infrastructure. The lever that created previous winters, external capital withdrawing when returns disappoint, no longer functions cleanly when capital comes from sources that evaluate returns through a strategic, not financial, lens – and when this investment is defensive, not offensive.
Moreover, the physical infrastructure underpinning AI deployment overlaps with existing strategic assets. Undersea cable laying, chip fabrication plants, and data centres all blur the line between private investment and strategic infrastructure. State actors are less likely to permit large-scale retrenchment when national competitiveness and critical infrastructure are at stake (Aziz & Vallas 2024).
Strategic suppliers (chipmakers, cloud partners, sovereign investors) have become integrated into financing structures, through deals in which chip suppliers buy equity or enter leaseback arrangements. This constellation of commercial commitments – multi-billion dollar chip supply deals among them – means that even if one private vehicle struggles, the ecosystem is likely to reallocate capacity rather than cease. Recent reporting on multi-party deals (OpenAI/Nvidia/Oracle/SoftBank/CoreWeave) shows the financing chain is now industrial, not purely financial.
Of course, labelling something as ‘strategic’ provides temporary insulation but doesn’t guarantee indefinite support. States have repeatedly abandoned technologies deemed ‘strategic’ when costs spiralled against returns – the Soviet computing industry, for instance, or the U.S. Strategic Computing Initiative (the military’s response to Japan’s civilian FGCS), abandoned despite its framing as a military priority.
Different dynamics are at play today. Big Tech is rapidly expanding infrastructure, funded primarily through platform revenues. Meanwhile, states are making varied choices. China is pursuing complete independence; Europe, regulatory sovereignty while relying on foreign infrastructure; the U.S., through the dominant private sector; and smaller nations mixing domestic development with international partnerships.
The resulting dependencies are real, but not absolute. Integration with Palantir or Azure creates switching costs – but this does not necessarily amount to complete capitulation by nation states. However, the multiplication of state actors, each with a different approach, creates a complex ecosystem in which the single-source funding collapses of the past appear less likely.
Conclusion
The core claim made here is narrow and empirical: that the mechanism that caused rapid collapses in earlier AI cycles – a sudden withdrawal of external funding – is weaker today, because the funding and political structures involved have changed. This is not to say that another AI winter cannot occur. Monopolies have been broken up, empires eventually collapse, and governments walk away from funding.
Even if valuations crash, the structural insulation described above makes an AI winter of the kind seen in the 20th century unlikely. Platforms have the means and incentives to sustain or consolidate AI assets rather than abandon them. Several scenarios present themselves:
- Technofeudal equilibrium: platforms keep subsidising AI because it protects future tollbooths (mediating economic and state activity), whether or not it generates productivity. Like payment rails or internet protocols (the latter of which are still free), AI becomes the layer through which economic and potentially state activity must pass, generating rents regardless of the value created.
- Consolidation: weaker players may fold, and incumbents can acquire distressed assets cheaply. The specialised nature of AI infrastructure creates an unusual dynamic: the sheer scale of GPU infrastructure currently being accumulated creates a stranded asset problem. These chips have other uses (scientific computing, rendering, crypto-mining), but none currently approaches the volume deployed for inference and training. The tens or hundreds of billions channelled into GPU infrastructure could be too expensive to leave idle, yet there is no comparable alternative demand at that scale – potentially forcing continued operation even at a loss.
- Environmental: AI’s massive energy demands, water consumption and dependence on climate-vulnerable supply chains accelerate the conditions that could make its infrastructure unsustainable. The Copernicus Climate Change Service confirmed that 2024 was roughly 1.6°C above pre-industrial levels – the window for limiting warming has effectively closed while AI’s energy demands grow explosively. The International Energy Agency (IEA) projects that data centre power consumption could nearly double to approximately 945 TWh by 2030, exceeding Japan’s current national electricity consumption, with AI driving much of this growth (IEA, 2024). Google’s climate commitments have been derailed by AI expansion: the company reported a 48% increase in greenhouse gas emissions since 2019, despite pledging to reach net zero by 2030, attributing the rise primarily to data centre energy consumption for AI workloads and acknowledging that its climate targets are now ‘extremely ambitious’ and may be unattainable (Google Environmental Report, 2024). Microsoft similarly reported a 30% increase in emissions since 2020, driven largely by infrastructure buildout (Microsoft, 2024). Taiwanese droughts threaten TSMC (Taiwan Semiconductor Manufacturing Company), whose chip fabrication requires 156,000 tons of water daily; a 2021 drought forced the company to truck in water to maintain operations (Reuters, 2021). Grid failures, water rights battles – all could impose the external discipline that markets and states seem reluctant to apply themselves. And macroeconomic shocks in the coming decades could force governments to reconsider sustaining AI infrastructure as priorities like climate adaptation, food security and mass displacement emerge.
- Undercut: more efficient architectures, photonic computing and/or open-source models could make current infrastructure investments obsolete. Efficient models (spiking neural networks, state-space models) could render the current scale of infrastructure oversized; photonic computing, if commercially viable, would strand electrical GPU infrastructure. Unlike AI winters driven by external funding withdrawal, this scenario involves AI succeeding through cheaper alternatives. Incumbent platform advantages in deployment, integration, and distribution somewhat undermine this possibility: Big Tech is well positioned either to prevent these alternatives from scaling through acquisition or capital advantages, or to absorb them and reconsolidate – either way, incumbency could persist.
Reading today’s AI expansion through the analogy of enclosure poses different questions. If the drive to expand rental opportunities succeeds and AI is embedded across economic activity, the question is no longer ‘when will the funding stop?’ but ‘who owns the infrastructure of activity, how do they extract value, and what political mechanisms sustain or discipline that extraction?’
In the next article, we turn to how enclosures have determined the kind of AI we have today – compute-heavy, over-parameterised, and costly.
Allen, R. C. (1982). The efficiency and distributional consequences of eighteenth century enclosures. The Economic Journal, 92(368), 937–953.
Allen, R. C. (1992). Enclosure and the yeoman: The agricultural development of the South Midlands, 1450–1850. Oxford University Press.
Alphabet IR. (2025). Q2 2025 Investor Report. Alphabet Inc.
Apple. (2025). Q3 2025 Financial Results. Apple Inc.
Amazon. (2025). Q2 2025 Earnings Report. Amazon.com, Inc.
Aziz, S., & Vallas, S. P. (2024). AI and critical infrastructure: Risk, regulation, and resilience. Technology & Society Review, 15(1), 55–78.
Boston Consulting Group. (2025). The Widening AI Value Gap: Build for the Future 2025. BCG.
Boyle, J. (2003). The second enclosure movement and the construction of the public domain. Law and Contemporary Problems, 66(1–2), 33–74.
Chorzempa, M. (2022). The cashless revolution: China’s reinvention of money and the end of America’s domination of finance and technology. PublicAffairs.
Copernicus Climate Change Service. (2025). Global temperature breach: 2024 exceeds 1.5°C threshold. European Centre for Medium-Range Weather Forecasts.
Crevier, D. (1993). AI: The tumultuous history of the search for artificial intelligence. Basic Books.
David, P. A. (1990). The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox. American Economic Review, 80(2), 355–361.
Financial Times. (2025). OpenAI valuation approaches $300 billion in secondary market transactions. Financial Times.
Financial Times. (2025a). How Nvidia’s Jensen Huang became AI’s global salesman. Financial Times.
Google. (2024). 2024 Environmental Report. Google LLC.
Helmond, A. (2015). The platformization of the web: Making web data platform ready. Social Media + Society, 1(2), 1–11.
Hendler, J. (2008). Avoiding another AI winter. IEEE Intelligent Systems, 23(2), 2–4.
International Energy Agency (IEA). (2024). AI and Data Center Energy Demand Projections. IEA Publications.
Krizhevsky, A., Sutskever, I., & Hinton, G. (2012). Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems (NeurIPS).
McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.
McCloskey, D. N. (1972). The enclosure of open fields: Preface to a study of its impact on the efficiency of English agriculture in the eighteenth century. The Journal of Economic History, 32(1), 15–35.
Meta. (2025). Q2 2025 Investor Update. Meta Platforms, Inc.
Microsoft. (2024). 2024 Sustainability Report. Microsoft Corporation.
Neeson, J. M. (1993). Commoners: Common right, enclosure and social change in England, 1700–1820. Cambridge University Press.
Perkins, T. (2025). Why the AI boom differs from previous tech bubbles. Financial Times.
Reuters. (2021). Taiwan’s TSMC faces water shortage challenges amid drought. April 2021.
Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.
Srnicek, N. (2016). Platform capitalism. Polity Press.
TechSpot. (2025). Nvidia commits $100 billion to OpenAI compute infrastructure. TechSpot.
Turner, M. E. (1986). English open fields and enclosures: Retardation or productivity improvements? The Journal of Economic History, 46(3), 669–692.
Varoufakis, Y. (2023). Technofeudalism: What killed capitalism. Bodley Head.
Whittaker, M. (2021). The steep cost of capture: How venture capital shapes AI. AI Now Institute.
Zuboff, S. (2019). The age of surveillance capitalism. Profile Books.