Human Resources Directors are attractive potential clients for companies selling workforce AI and human resources solutions underpinned by large language models.
First, HR is always under cost pressure because it is seen as infrastructure that does not generate profit.
Second, HR is often (though some might suggest ‘always’) subject to grumbles from the business that the function is somehow not delivering what is needed.
Third, HR is accountable for rich reserves of workforce data, and gaining access to this data brings considerable benefits to specialist AI vendors and to enterprise platforms with embedded AI capabilities, such as those from Microsoft and Google.
And fourth, HR is not the technology function! With limited technical domain knowledge, HR could be more susceptible to sales pitches on the alleged merits of various AI solutions.
Today AI is widely used in volume recruiting. Additionally, workforce or people analytics functions are now deploying LLMs to analyse data from enterprise resourcing platforms and software applications such as SAP, Oracle and Microsoft 365.
AI models are analysing these data sources to make predictions about workforce trends, which can then be woven into strategic and operational HR decision-making.
Although the use of AI in other parts of the HR operating model is currently limited, there is a burgeoning number of workforce AI solutions available. These include talent management tools, career and skills solutions, learning platforms, scalable coaching offerings, algorithmic performance management products and so on.
The use of AI in pretty much all aspects of HR is likely to proliferate in the next few years. There are a few reasons why this could reasonably happen.
The hopes
One
HR leaders believe AI will save time, and saving time means saving staff costs. On the plus side, it could make HR more efficient and productive, for instance by upskilling the existing team onto higher-value and more complex work, helping people feel more committed and engaged.
The downside is that this might reduce the number of HR workers, because AI takes on existing tasks and, in the near term, new roles do not emerge on the labour market.
Two
Introducing AI might finally solve the data problem. Many firms now have a ‘data lake’, but finding anything in the lake is nigh on impossible. LLMs are built to analyse data at a speed and scale that outstrip human capability. This means making sense of disparate and messy data sources, potentially creating useful insights into current workforce practices and predicting future trends.
These qualitative predictions could improve strategic workforce planning, which many in HR know is easy to talk about but hard to achieve.
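To make this concrete, here is a minimal, illustrative sketch of the kind of pipeline a people analytics team might build: merging two hypothetical system exports, aggregating them, and asking an LLM to narrate the trends. The file names, column names and model name are assumptions for illustration, not references to any specific product.

```python
# Illustrative sketch only: merge hypothetical HR exports, aggregate them,
# and ask an LLM to summarise the trends. Not a production pipeline.
import pandas as pd
from openai import OpenAI  # assumes the OpenAI Python SDK (>= 1.0) is installed

# Hypothetical exports from an HRIS and a recruiting system
headcount = pd.read_csv("hris_headcount_by_month.csv")    # columns: month, function, headcount, leavers
pipeline = pd.read_csv("ats_requisitions_by_month.csv")   # columns: month, function, open_roles, time_to_fill_days

merged = headcount.merge(pipeline, on=["month", "function"])
summary = (
    merged.groupby("function")
    .agg(avg_headcount=("headcount", "mean"),
         total_leavers=("leavers", "sum"),
         avg_time_to_fill=("time_to_fill_days", "mean"))
    .round(1)
)

prompt = (
    "You are assisting a strategic workforce planning review. "
    "Summarise the key trends and risks in this aggregated HR data:\n\n"
    + summary.to_string()
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever your organisation has approved
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Even in a sketch like this, note that only aggregated figures reach the model; what data leaves the organisation, and under what contract, is exactly the kind of trade-off discussed later in this post.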
Three
AI could give employees a better experience when they interact with HR. There will be less reliance on HR operations, which is constrained by time and capacity. The technology might also provide more timely responses, minimising friction in the employee experience when it comes to resolving standard HR queries.
But there are problems
Throughout the AI tech stack, from model makers (such as OpenAI, Mistral and Anthropic) to technology infrastructure firms (such as Microsoft and Google) and the third-party software applications designing solutions for HR, everyone is after high-quality organisational data.
Given that current large language models have been trained on all available ‘open source’ data, organisational data sources offer untapped, high-quality fuel to technology ecosystem players. (Discussing the use of synthetic data for training is beyond the scope of this blog.)
However, such ‘data jewels’ are being signed over to AI vendors based on untested (and sometimes over-marketed) promises that the technology is a gateway to more efficient and effective HR functions.
There is a risk that HR is not intentional and explicit in articulating the ethical trade-offs between what AI might bring and its potential impacts on the workforce, culture and the relationship between employees and their organisation.
Familiarity with the ethical implications of using AI across the gamut of potential HR applications would go some way to managing these dilemmas.
A few things to consider
Workforce impacts
Whether AI is utilised for recruitment or to support workers in learning new skills, its plus points need to be assessed against the risks.
These include potentially displacing workers by making existing skills and roles obsolete, and the possibility that employees feel less ownership and autonomy over the tasks they do because more aspects of a role are outsourced to technology. This might diminish people’s sense that they are doing something worthwhile, which could then erode commitment and engagement.
Values and culture impacts
AI has the potential to change current working norms that guide the relationship between individuals and employers.
For example, new levels of work-behaviour analysis (the number of emails sent, the number of colleague connections in a typical day, the amount of multi-tasking in meetings, the number of internal roles someone is applying for) give HR and managers many more insights into what workers do, when and how.
This fundamentally changes the worker-organisation relationship, even if data is aggregated so individual-level data is not identifiable.
This is not to say there is anything inherently wrong with doing this in a benign organisational environment, where analysis of these data patterns will be managed appropriately to support workers and the business’s overall goals. But not all organisations operate like this.
The trade-off that HR needs to manage when considering AI and human resources is ensuring that the benefits of AI do not inadvertently undermine collective efforts to create a purpose-driven, people-oriented and inclusive working environment through perceived or actual ‘surveillance’ practices.
Scooping up routine work-activity data without educating employees on the intent and reason behind doing so could impair trust, engagement and efforts to maintain a values-based organisation.
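As a concrete illustration of the aggregation point above, the sketch below (with hypothetical column names and a hypothetical threshold) rolls activity data up to team level and suppresses metrics for small groups, so that individual behaviour cannot be read back out of the report.

```python
# Illustrative sketch only: aggregate hypothetical activity data to team level
# and suppress any group smaller than a minimum size. The threshold is an
# assumption, not a recommended standard.
import pandas as pd

MIN_GROUP_SIZE = 10  # assumed suppression threshold

activity = pd.read_csv("collaboration_activity.csv")  # columns: employee_id, team, emails_sent, meetings_attended

team_stats = (
    activity.groupby("team")
    .agg(employees=("employee_id", "nunique"),
         avg_emails_sent=("emails_sent", "mean"),
         avg_meetings=("meetings_attended", "mean"))
    .round(1)
)

# Blank out metrics for small teams rather than publishing near-individual data
small = team_stats["employees"] < MIN_GROUP_SIZE
team_stats.loc[small, ["avg_emails_sent", "avg_meetings"]] = float("nan")

print(team_stats)
```

Technical controls like this help, but they do not replace telling employees what is being collected and why.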
Communications impacts
Talking about AI in the workplace (how it is used, why, the benefits and the risks) is a process that requires consideration, time, commitment and nuance.
Naturally, AI vendors want to sell technology bundled with great stories about its ‘potential’, which could lead some HR functions to communicate only the benefits to employees.
However, HR should be honest about the impact of AI on human resources: be accurate about the technology’s potentially mundane upsides and balance this with fair descriptions of its limitations. Not over-selling is important.
AI is far from infallible, and there are limits to its reliability, as the well-documented ‘hallucination’ problem of LLMs shows. (There is a lot of research taking place on limiting the impact of hallucinations, but that is out of scope here.)
Next steps
HR teams must educate themselves on AI ethics. This is a broad field and can be hard to operationalise.
Perhaps the best way to do this is to agree the top five trade-offs that matter to the HR function, then funnel all potential AI builds or purchases through these weighted trade-offs, as sketched below.
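A minimal sketch of what such a weighted trade-off funnel could look like in practice follows; the trade-offs, weights and scores are placeholders to illustrate the mechanics, not a recommended framework.

```python
# Illustrative sketch only: score each proposed AI build or purchase against
# the trade-offs the HR function has agreed matter most. All names, weights
# and scores below are hypothetical placeholders.

# Hypothetical trade-offs, with weights summing to 1.0
TRADE_OFFS = {
    "efficiency_vs_job_displacement": 0.30,
    "insight_vs_employee_privacy": 0.25,
    "personalisation_vs_surveillance_risk": 0.20,
    "speed_vs_accuracy_and_hallucination": 0.15,
    "vendor_capability_vs_data_lock_in": 0.10,
}

def score_proposal(name: str, scores: dict[str, float]) -> float:
    """Weighted score for one AI proposal; each score runs 0 (poor) to 5 (good)."""
    total = sum(TRADE_OFFS[t] * scores[t] for t in TRADE_OFFS)
    print(f"{name}: weighted score {total:.2f} out of 5")
    return total

# Example: a hypothetical CV-screening tool assessed against the trade-offs
score_proposal(
    "cv_screening_tool",
    {
        "efficiency_vs_job_displacement": 4,
        "insight_vs_employee_privacy": 3,
        "personalisation_vs_surveillance_risk": 4,
        "speed_vs_accuracy_and_hallucination": 2,
        "vendor_capability_vs_data_lock_in": 3,
    },
)
```

The value is less in the arithmetic than in forcing every proposal through the same agreed ethical lens before money or data changes hands.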
It is equally important to educate employees on how AI is being used by HR, the likely impact of doing so and how AI ethics are being applied in the organisation to manage and mitigate risks of harm.
Based on internal upskilling about AI use cases and the associated risks, HR teams should become more demanding of their technology vendors and partners.
Understanding the ethical guardrails these external companies have in place (when fine-tuning models and running deployment feedback loops in organisational settings) will help bring clarity on potential risks, as will setting up a sandbox environment before any full roll-out.
In an ideal world, HR should be part of internal corporate ethics and AI governance discussions. Obviously larger organisations will have more resources and expertise to put into these forums.
If they are not in place, then HR – as a business function at the frontline of significant AI utilisation – should convene such meetings.
There is enough information on the web from global organisations such as the OECD (as well as national governments) to get such a group up and running.
The key thing is being clear on the business benefits and the ethical trade-offs of AI and human resources.
Work out what the organisation and HR really care about and, when considering AI and human resources, make sure employees receive honest and clear communication.
AI is changing HR. HR therefore needs to change in order to use AI well.
(Publishing in my personal capacity as a Founder and Consultant for EthicAI)
References
Albassam, W. A. (2023). The Power of Artificial Intelligence in Recruitment: An Analytical Review of Current AI-Based Recruitment Strategies. International Journal of Professional Business Review, 8(6), e02089. https://doi.org/10.26668/businessreview/2023.v8i6.2089
Corrêa, N. K., Galvão, C., Santos, J. W., Del Pino, C., Pinto, E. P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., & De Oliveira, N. (2023). Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns, 4(10), 100857. https://doi.org/10.1016/j.patter.2023.100857
The Economist (2024, July 27). AI firms will soon exhaust most of the internet’s data. Can they create more? economist.com
Gélinas, D., Sadreddin, A., & Vahidov, R. (2022). Artificial intelligence in human resources management: A review and research agenda. Pacific Asia Journal of the Association for Information Systems, 14(6), 1–42. https://doi.org/10.17705/1pais.14601
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., & Kiela, D. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Advances in Neural Information Processing Systems, 33, 9459–9474. https://proceedings.neurips.cc/paper/2020/hash/6b493230205f780e1bc26945df7481e5-Abstract.html
Martela, F., Gómez, M., Unanue, W., Araya, S., Bravo, D., & Espejo, A. (2021). What makes work meaningful? Longitudinal evidence for the importance of autonomy and beneficence for meaningful work. Journal of Vocational Behavior, 131, 103631. https://doi.org/10.1016/j.jvb.2021.103631
Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial Intelligence in Human Resources Management: Challenges and a Path Forward. California Management Review, 61(4), 15–42. https://doi.org/10.1177/0008125619867910