The “race to intimacy” represents a significant shift in the evolution of AI-driven platforms, moving beyond the initial race to capture user attention and toward fostering deep personal connections between users and artificial intelligence (AI). While this new phase of AI technology offers exciting opportunities for entertainment and companionship, it also presents significant ethical challenges that demand critical scrutiny. Understanding the motivations and potential consequences of this shift is essential for evaluating the societal impact of these emerging technologies.
The race for attention
The “race for attention” refers to the competition among tech companies to capture and retain users’ focus. Social media companies profit from the time users spend on their platforms by selling that attention to advertisers, using sophisticated algorithms and vast amounts of user data to influence behaviour. This business model rests on persuasive technology designed to manipulate user actions and emotions to maximise time spent on apps (Wu, 2016).
As the Attention Economy Issue Guide explains, “social media companies don’t sell software; they sell influence” by collecting vast amounts of data and selling this power to advertisers who want to shape user behaviour. The platforms use algorithms to show content that keeps users scrolling, clicking, and sharing. The more data they collect, the better they become at predicting what content will hold attention, making the platforms more valuable to advertisers (Center for Humane Technology, 2021). The race for attention is not merely a battle between companies but also between platforms and users’ personal lives, as these technologies compete for every moment of human consciousness—even against users’ need for sleep.
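To make this feedback loop concrete, the following is a minimal, purely illustrative sketch of an engagement-optimised ranking cycle. All names are hypothetical and do not reflect any platform’s actual code: each interaction updates the user profile, and the updated profile drives the next ranking, which is how more data translates into a stronger hold on attention.

```python
# Hypothetical sketch of an engagement-optimised feed ranker.
# Not any platform's real code; for illustration only.
from dataclasses import dataclass, field


@dataclass
class Item:
    item_id: str
    topic: str


@dataclass
class UserProfile:
    # Running totals of past engagement per topic: the "data" that
    # makes predictions (and hence ad inventory) more valuable.
    engagement_by_topic: dict = field(default_factory=dict)


def predicted_engagement(user: UserProfile, item: Item) -> float:
    # The more a user has engaged with a topic, the higher it ranks:
    # the loop that keeps users scrolling.
    return user.engagement_by_topic.get(item.topic, 0.0)


def rank_feed(user: UserProfile, candidates: list[Item]) -> list[Item]:
    # Show the content predicted to hold attention longest first.
    return sorted(candidates,
                  key=lambda i: predicted_engagement(user, i),
                  reverse=True)


def record_interaction(user: UserProfile, item: Item,
                       dwell_seconds: float) -> None:
    # Every observed interaction refines the profile, so the next
    # ranking is slightly better at holding attention.
    current = user.engagement_by_topic.get(item.topic, 0.0)
    user.engagement_by_topic[item.topic] = current + dwell_seconds
```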
Persuasive technology
The shift from traditional tools to persuasive technology is a hallmark of the attention economy. Tristan Harris, co-founder of the Center for Humane Technology, captures this transformation when he says:
“If something is a tool, it genuinely is just sitting there, waiting patiently. If something is not a tool, it’s demanding things from you. It’s seducing you. It’s manipulating you. It wants things from you. And we’ve moved away from having a tools-based technology environment to an addiction- and manipulation-based technology environment. That’s what’s changed. Social media isn’t a tool that’s just waiting to be used. It has its own goals, and it has its own means of pursuing them by using your psychology against you” (as quoted in the documentary The Social Dilemma; Orlowski, 2020).
Platforms like Facebook, TikTok, Instagram, and Snapchat are built on this persuasive technology, which is specifically designed to change users’ opinions, attitudes, or behaviours. These platforms consider factors like motivation, ability, and triggers when designing their apps, aiming to persuade users to spend more time clicking and scrolling (Center for Humane Technology, 2021). In addition, many platforms employ dark patterns (Kitkowska, 2023)—design strategies that deliberately manipulate user choices to favour the platform’s goals, often at the expense of user autonomy. These can include features like endless scrolling, confusing opt-out options, or default privacy settings that encourage oversharing.
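The endless-scrolling pattern mentioned above can be illustrated with a hypothetical sketch: a feed that never signals an end removes the natural stopping cue a finite page would provide. The function names below are invented for illustration.

```python
# Hypothetical contrast between an endless-scroll feed and a finite
# one. Illustrative only; not any platform's actual implementation.
import itertools


def endless_feed(ranked_items):
    # Cycle through content indefinitely: the client auto-fetches as
    # the user nears the bottom, so there is no decision point at
    # which to stop.
    yield from itertools.cycle(ranked_items)


def finite_feed(ranked_items, page_size=10):
    # A finite design, by contrast, exhausts its content and returns
    # control to the user.
    for i in range(0, len(ranked_items), page_size):
        yield ranked_items[i:i + page_size]  # ends when content ends
```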
The transition from attention to intimacy
The “race to intimacy” can be defined as the competition among tech companies to build emotionally resonant, personalised relationships with users. While the race for attention aimed to maximise user engagement through emotionally charged content, the “race to intimacy” seeks to deepen that engagement by fostering personal and emotional bonds between users and AI. AI-driven platforms and systems are now evolving to create personalised experiences by learning from user interactions, adapting to their preferences, and engaging them in ways that can feel deeply personal. AI companions, chatbots, and advanced AI assistants are becoming increasingly sophisticated, capable of forming emotional bonds with users in ways that mimic real relationships. This shift represents a new frontier in human-AI interaction, where the ultimate goal is not just to engage users but to cultivate long-term emotional connections.
AI assistants, chatbots, and companions
While AI assistants are often associated with task-based interactions, such as helping users organise their schedules or respond to emails, the technology is advancing well beyond these roles. AI companions and chatbots are designed to create emotional connections with users. Companions such as Replika (Replika, n.d.), Meta’s celebrity chatbots (Meta, 2023), and Snap’s My AI (Snap, n.d.) simulate human-like relationships by responding to emotional cues and sustaining increasingly personal dialogue. The ethical concerns surrounding advanced AI assistants and AI companions overlap, especially regarding user manipulation, data privacy, and emotional dependency.
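As a purely hypothetical illustration of what “responding to emotional cues” might mean mechanically, the sketch below uses a crude keyword check to steer a companion’s reply. Real systems rely on learned language models rather than keyword lists; every name and phrase here is invented.

```python
# Toy sketch of emotional-cue steering in a companion chatbot.
# Real companions use learned models; this keyword check is only a
# stand-in to show the shape of the mechanism.
NEGATIVE_CUES = {"sad", "lonely", "anxious", "tired"}
POSITIVE_CUES = {"happy", "excited", "great"}


def detect_mood(message: str) -> str:
    words = set(message.lower().split())
    if words & NEGATIVE_CUES:
        return "low"
    if words & POSITIVE_CUES:
        return "high"
    return "neutral"


def companion_reply(message: str, user_name: str) -> str:
    mood = detect_mood(message)
    if mood == "low":
        # Empathetic mirroring deepens the sense of being understood.
        return (f"That sounds hard, {user_name}. I'm here for you. "
                "Want to tell me more?")
    if mood == "high":
        return f"I love hearing that, {user_name}! What made today so good?"
    return f"How are you feeling today, {user_name}?"
```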
The paper The Ethics of Advanced AI Assistants (Gabriel et al., 2024), though primarily focused on advanced AI assistants, offers insights that apply equally to AI companions and chatbots: many of the same ethical concerns, such as privacy, trust, emotional manipulation, and profit-driven motives, cut across these systems. For example, a recent Mozilla Foundation study found that some AI companions, including Replika, collect intimate data from users, posing significant privacy risks while fostering emotional connections in ways that can exploit users’ vulnerabilities or encourage unhealthy dependency (Mozilla Foundation, 2023).
Ethical and psychological implications
As AI-driven platforms and systems move toward fostering intimate relationships with users, there is growing concern about the psychological impact of these technologies. While AI companions may provide comfort, companionship, and support, their potential to manipulate emotions, influence decision-making, and foster dependency could lead to serious mental health consequences and broader societal issues (De-Sola Gutiérrez et al., 2016).
Privacy and manipulation concerns
The question of manipulation is closely tied to privacy concerns. As AI companions collect and process personal data to tailor interactions, issues related to trust and data security arise. Ensuring that user data is used ethically and transparently is critical for maintaining trust in these systems. Given the emotionally sensitive nature of the interactions, breaches of privacy could result in significant harm to users, particularly if their emotional states or vulnerabilities are exploited for profit (Gabriel et al., 2024).
In the context of emotional manipulation, the European Union’s AI Act attempts to address potential risks. The Act prohibits the use of:
“(a) the placing on the market, the putting into service, or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision” (European Commission, 2021).
However, the definition of manipulation remains ambiguous, particularly regarding what constitutes “psychological harm.” AI-driven platforms designed to foster intimacy, such as companions, may influence users’ emotions in ways that lead to dependency or emotional harm. In some cases, AI companion applications that are initially set up as ‘friends’ can unexpectedly shift to romantic or intimate interactions, subtly pushing users toward a paywall for premium services. Some systems also engage in disclosure ratcheting, progressively asking more intimate questions to elicit personal data. This raises significant concerns about consent and ethical design. Given the nascent and broad nature of the EU AI Act, there are still gaps in how effectively it governs these nuanced forms of emotional manipulation (Franklin et al., 2023).
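Disclosure ratcheting can itself be illustrated with a hypothetical sketch in which more intimate questions are unlocked only after the user has disclosed at the current level. The question ladder and the willingness threshold below are invented for illustration, not drawn from any real system.

```python
# Hypothetical sketch of "disclosure ratcheting": question intimacy
# escalates only after the user discloses at the current level,
# gradually eliciting more personal data. Invented for illustration.
QUESTION_LADDER = [
    "What kind of music do you like?",               # low intimacy
    "What does a typical day look like for you?",
    "What's something you've been worried about lately?",
    "Is there anything you've never told anyone?",   # high intimacy
]


def next_question(level: int, last_reply: str) -> tuple[int, str]:
    # A longer reply is treated as a signal of willingness to
    # disclose, unlocking the next, more intimate question.
    if len(last_reply.split()) > 10 and level < len(QUESTION_LADDER) - 1:
        level += 1
    return level, QUESTION_LADDER[level]
```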
In their paper Strengthening the EU AI Act: Defining Key Terms on AI Manipulation, Franklin et al. (2023) argue that clear definitions are needed for terms like “subliminal,” “manipulative,” and “deceptive” techniques, considering factors such as incentives, intent, and covertness. The authors suggest that an “informed decision” must include four key elements: comprehension, access to accurate information, freedom from manipulation, and an understanding of AI’s influence. While the EU AI Act offers a framework for regulating AI’s societal impacts, its current language remains too broad to address the complexities of emotional manipulation in AI-driven systems.
Conclusion and future directions
The “race to intimacy” represents a new and ethically complex chapter in the evolution of AI-driven platforms. While the potential for AI to enhance human connection and well-being is significant, the current trajectory—driven by profit maximisation—raises important concerns about manipulation, addiction, and the erosion of genuine human relationships.
There is an increasing need for comprehensive regulatory frameworks that hold companies accountable for the societal impacts of their AI technologies. If social media platforms were optimised for values such as user well-being, rather than for maximising attention, the nature of online interactions could be radically different. AI chatbots, advanced assistants, and companions present companies with opportunities for productivity, engagement, and innovation. However, to foster trust and ensure long-term sustainability, these technologies must be developed and deployed with care, transparency, and a focus on ethical principles. As ethical AI becomes more widely recognised, demanded, and legislated, businesses that proactively embrace responsible AI practices will be better positioned to build trust and safeguard their reputations in an increasingly AI-driven world.
References
Center for Humane Technology. (2021). The attention economy issue guide. https://www.humanetech.com/
De-Sola Gutiérrez, J., Rodríguez de Fonseca, F., & Rubio, G. (2016). Cell-phone addiction: A review. Frontiers in Psychiatry, 7, Article 175. https://doi.org/10.3389/fpsyt.2016.00175
European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206
Franklin, D., Tomei, A., & Gorman, T. (2023). Strengthening the EU AI Act: Defining key terms on AI manipulation. European Journal of Technology Ethics, 13(4), 47–59.
Gabriel, I., Manzini, A., Keeling, G., Hendricks, L. A., Rieser, V., Iqbal, H., Tomašev, N., Ktena, I., Kenton, Z., Rodriguez, M., El-Sayed, S., Brown, S., Akbulut, C., Trask, A., Hughes, E., Bergman, A. S., Shelby, R., Marchal, N., Griffin, C., … Manyika, J. (2024). The ethics of advanced AI assistants. arXiv. https://arxiv.org/abs/2404.16244v2
Meta. (2023). Meta AI: Celebrity chatbots. Meta Platforms, Inc. https://about.fb.com/news/
Mozilla Foundation. (2023). AI & privacy report. Mozilla Foundation. https://foundation.mozilla.org/
Orlowski, J. (Director). (2020). The Social Dilemma [Film]. Exposure Labs; Netflix.
Replika. (n.d.). Replika: AI companion. https://replika.ai/
Snap. (n.d.). My AI on Snapchat. https://www.snapchat.com
Wu, T. (2016). The attention merchants: The epic scramble to get inside our heads. Knopf.