Moving fast only breaks things

Sep 6, 2024 | AI Governance

Artificial intelligence (AI) is often heralded for its promise to transform industries and create new opportunities. With this promise comes a rush to integrate AI, driven by a fear of falling behind competitors or missing out on investor interest. Many organisations fall into this pressure trap and start pursuing AI for its own sake, without a clear understanding of whether their AI strategy serves their overall goals. Ultimately, this can lead to AI creating more costs than value.

Falling into the AI pressure trap

The narrative around AI, both in corporate boardrooms and government halls, is one of urgency: ‘If we do not develop it, someone else will.’ Companies face pressure from investors eager to fund AI-related initiatives and directors demanding to know what their companies are doing with AI. The reality is that incorporating AI is not a plug-and-play process, and rushing to adopt AI without proper consideration can lead to a range of problems that undermine an organisation’s stability and reputation. Consider, for example, the following costs associated with integrating AI.

Financial investments: Developing and implementing AI systems requires significant financial investment. From purchasing data and computing power to hiring AI experts and training employees, the costs quickly add up. Moreover, cutting corners when integrating AI can create technical debt, as organisations may invest in technology that does not fit their digital infrastructure or is difficult to scale or maintain, which ultimately increases costs over time. Organisations must therefore ask themselves whether the expected return on investment justifies the expense, lest AI create financial strain rather than value.

Time and resources: Developing AI is time-consuming: data must be collected and cleaned, and models trained and tested. After a model has been deployed, ongoing maintenance, tuning, and updating are required to keep it functioning optimally. On top of that, now that the EU AI Act is in place, significant time and effort must be directed towards regulatory compliance (and rightfully so).

Operational and reputational risk: AI carries inherent operational and reputational risks, especially if essential steps like testing, training, and ethical evaluation are rushed or skipped. The consequences can be severe if an AI system underperforms or behaves unpredictably, for instance by displaying bias or generating incorrect outputs. A biased algorithm in hiring or lending, for example, could lead to discrimination claims, lawsuits, and public backlash. This would not only incur direct costs in legal fees and settlements but also damage the organisation’s reputation, let alone the impact it may have on those discriminated against by the model.

Environmental costs: The environmental impact of AI is often overlooked. Training large AI models requires substantial computational power, which consumes significant amounts of energy. The carbon footprint of AI operations can be substantial, especially if the data centres are not powered by renewable energy sources. Organisations need to consider whether their AI strategy and initiatives align with their sustainability goals. If environmental stewardship is a core value, then the energy-intensive nature of AI could be at odds with this principle.

Impact on staff and organisational culture: AI can automate tasks and make processes more efficient, but it can also lead to job displacement, reduced morale as work becomes more repetitive and less meaningful, or even resistance from employees who feel threatened by the technology. Additionally, the EU AI Act requires organisations to ensure their employees are AI literate, meaning human resources must be allocated to training that could otherwise have been spent on other projects.

Societal costs: AI technologies can have far-reaching impacts on societal well-being, stemming from issues around privacy, surveillance, discrimination, loss of agency, and threats to democracy. The cost of violating societal norms can be substantial and reach far beyond the scope of the organisation. As such, organisations should feel urged to reflect on whether their AI practices contribute to larger societal harms.

Start by asking why

AI can bring significant benefits, but only if adopted thoughtfully and strategically. Starting by asking ‘why’ helps to avoid the pitfalls of implementing AI for its own sake and ensures that any AI initiative truly serves the intended objectives. Ultimately, doing so will save time, money, and effort while minimising legal and ethical risks. A proper AI strategy, rooted in the organisation’s goals and mindful of AI’s broader implications, creates the necessary conditions for making AI more valuable than costly.