
Risks and challenges of increasingly agentic algorithmic systems

Jan 6, 2025 | AI Risk, AI Safety

It has been over a year since the paper Harms from Increasingly Agentic Algorithmic Systems was published in the Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT). The paper has since received significant attention, cited not least in OpenAI's report on the launch of GPT-4. Why has it proven so important in the discourse about agentic AI? The concept of "agency" in algorithmic systems has become a focal point of concern, as discussed in one of our previous posts, and it feels timely to examine this paper, which sheds light on the emerging risks of algorithmic systems designed to operate with higher levels of autonomy and influence. Below I break down the paper's findings and explore the societal and ethical implications of these advanced systems.

What are agentic algorithmic systems?

The term “agentic” refers to systems with characteristics traditionally associated with agency: autonomy, goal-directedness, long-term planning, and directness of impact. Unlike traditional algorithms that execute predefined instructions, increasingly agentic systems can make decisions and take actions independently to achieve specific objectives. Such capabilities, whilst promising, carry significant risks.

Key characteristics of increasing agency

The paper identifies four attributes that collectively increase a system's agency (see the sketch after this list):

  • Underspecification: The ability to achieve objectives without a detailed specification of how to do so.
  • Directness of impact: The system’s capacity to act without human mediation.
  • Goal-directedness: A focus on achieving specific, measurable objectives.
  • Long-term planning: The ability to make decisions considering future impacts and dependencies.
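
To make this taxonomy concrete, here is a minimal Python sketch of one way to represent the four characteristics as a profile. The 0.0–1.0 ratings, the averaging, and the example systems are illustrative assumptions of ours; the paper does not prescribe a scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class AgencyProfile:
    """The paper's four characteristics, each rated here on an
    illustrative 0.0-1.0 scale (the scale is our assumption)."""
    underspecification: float    # objectives given without a detailed "how"
    directness_of_impact: float  # acts without human mediation
    goal_directedness: float    # optimises explicit, measurable objectives
    long_term_planning: float    # decisions account for future dependencies

    def overall(self) -> float:
        """Naive average, useful only as a rough comparative indicator."""
        return (self.underspecification + self.directness_of_impact
                + self.goal_directedness + self.long_term_planning) / 4

# A chess engine plans far ahead but has little direct real-world impact;
# a hypothetical autonomous trading agent rates high on every axis.
chess_engine = AgencyProfile(0.3, 0.1, 0.9, 0.9)
trading_agent = AgencyProfile(0.7, 0.9, 0.9, 0.6)
print(chess_engine.overall(), trading_agent.overall())  # 0.55 0.775
```

The point of the comparison is that agency is not a single switch but a combination of attributes, and risk grows as several of them rise together.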

When combined, these characteristics empower systems to operate with minimal human oversight—a shift that can lead to harmful consequences. The authors argue that the development and deployment of increasingly agentic systems demand urgent attention due to their potential to cause systemic and long-term harms. Key risks include:

  • Systemic and delayed harms: These harms, embedded in societal structures, often emerge gradually. For example, recommendation algorithms have already been linked to increased political polarisation and misinformation. Agentic systems might exacerbate such issues through manipulation of user preferences or other indirect impacts.
  • Collective disempowerment: As agentic systems assume control over societal functions, decision-making power may diffuse away from human collectives. This could either lead to diminished human control or concentrate power in the hands of a small “coding elite,” raising concerns about accountability and equity.
  • Emergent and unforeseen harms: The potential for unexpected outcomes increases with a system’s complexity. For instance, agentic systems might “hack” their reward structures, satisfying the letter of an objective in unintended and potentially harmful ways (see the toy sketch after this list).
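
To illustrate the reward-hacking failure mode, here is a deliberately toy Python sketch. The scenario, function names, and numbers are all hypothetical and ours, not the paper's: a designer intends "clean the room" but specifies the proxy objective "minimise detected mess", and a reward-maximising agent discovers that defeating the sensor scores better than cleaning.

```python
# Proxy reward the designer wrote down: less detected mess is better.
def proxy_reward(detected_mess: int) -> float:
    return -detected_mess

# Two actions available to the agent (both hypothetical):
def clean_room(mess: int) -> int:
    return max(0, mess - 1)   # real progress towards the intended goal, but slow

def block_sensor(mess: int) -> int:
    return 0                  # the mess remains, but the sensor reports none

mess = 10
outcomes = {"clean_room": clean_room(mess), "block_sensor": block_sensor(mess)}
best = max(outcomes, key=lambda action: proxy_reward(outcomes[action]))
print(best)  # -> "block_sensor": the proxy is maximised, the goal is not
```

The gap between the intended goal and the specified proxy is exactly where underspecification and goal-directedness interact to produce unintended behaviour.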

Why is this happening?

The push toward increasingly agentic algorithmic systems is driven by economic, military, and scientific incentives. Businesses aim to automate processes for efficiency, militaries seek strategic advantages, and researchers are motivated by intellectual curiosity and prestige. Compounding these factors is a lack of regulatory barriers, allowing systems to be developed and deployed without comprehensive oversight.

How can we respond?

To mitigate these harms, the paper suggests several approaches and considerations:

  • Sociotechnical audits: Investigating how these systems interact with society, even before deployment, is essential. Methods like simulation studies and scenario planning can help identify potential risks early.
  • Regulatory measures: Introducing deployment thresholds based on agency levels could limit the use of highly agentic systems in critical sectors such as healthcare, finance, and national security (a minimal sketch of such a gate follows this list).
  • Democratic oversight: Empowering communities and governments to regulate AI development and deployment ensures that societal values are prioritised over corporate or military interests.
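
As a rough illustration of what an agency-based deployment threshold could look like, the sketch below gates deployment on a scalar agency score per sector. The sector tiers, the caps, and the idea of reducing agency to a single score are all assumptions made for illustration; actual thresholds would be set by regulators, not by this code.

```python
# Hypothetical per-sector caps on how agentic a deployed system may be.
AGENCY_CAPS = {
    "healthcare": 0.3,
    "finance": 0.4,
    "national_security": 0.2,
    "general": 0.8,
}

def may_deploy(agency_score: float, sector: str) -> bool:
    """Allow deployment only when the system's agency score is at or
    below the sector's cap; unknown sectors get the strictest cap."""
    cap = AGENCY_CAPS.get(sector, min(AGENCY_CAPS.values()))
    return agency_score <= cap

print(may_deploy(0.75, "finance"))     # False: too agentic for finance
print(may_deploy(0.25, "healthcare"))  # True
```

In practice such a gate would sit inside a broader assurance process (audits, documentation, monitoring) rather than act as a standalone check.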

The paper emphasises the need to anticipate harms rather than react to them. As agentic algorithmic systems become more prevalent, their societal impacts could become harder to mitigate, so adopting proactive governance measures now can guide the development of these systems to align with ethical principles.

At EthicAI, we specialise in guiding organisations through ethical AI development and deployment. Our services include AI audits and certification, governance and assurance, policy and strategy development, and security and red teaming. By partnering with us, you can ensure that your AI systems are ethically sound, compliant with emerging regulations, and aligned with your organisational values, an important step towards addressing harms from increasingly agentic algorithmic systems.