AI: how HR can ask (and answer) the right questions
Jobseekers have been using artificial intelligence to turn their holiday snaps into professional photos. But that’s just the tip of the iceberg for the technology – which is why the people profession needs to understand the potential, and the limitations, of what’s coming next
by Chris Stokel-Walker
A revolution on this scale requires a nimble response from leaders and their HR teams to capture lightning in a bottle – and to ensure that, when and if AI is introduced to operations, it is done in a proportionate, consensual and realistic way.
The current moment is similar to the one in the 1990s when communication first went digital in a widespread way, says Clare Walsh, director of education at the Institute of Analytics. In LinkedIn survey data, 61 per cent of HR professionals globally say their organisation is currently rolling out AI training to support employees, and 80 per cent of HR practitioners think AI will be a crucial tool to help them in their jobs. “HR will have a crucial role in this process,” says Walsh, “because the whole workforce has to become data literate in the 2020s.”
Part of the reason for this is quantum computing, which could eventually take the current basic principles of AI and turbocharge them. Quantum machines deal not in bits, but in qubits, which can hold far more information because they exploit superposition: a register of 64 qubits can, in principle, represent 2⁶⁴ – around 18 quintillion – states at once, and machines of that scale have already been built. That promises dramatic speed-ups for certain classes of problem, although today's devices remain error-prone and a practical quantum advantage for AI is still some years away.
CIPD data suggests 29 per cent of UK business leaders have introduced intelligent automation so far, while separate polling has found that half of the general population have tried their hand at generative AI. That suggests business is on the cusp of widespread adoption – which means it is vital HR professionals understand the technology, its potential and its limitations. And what better time to start than now?
How AI works
To understand how best to use AI in the workplace, it’s important to first understand how AI works. “The key that is really helpful for unpacking the tech, and which is a little bit overlooked at times, is that AI is still about prediction,” says Noah Giansiracusa, associate professor of mathematics and data science at Bentley University. It works using machine learning: feeding a computer a vast volume of data and encouraging it to use neural networks – computational structures loosely modelled on how the brain works – to discern patterns. In the 2000s, it was used to analyse flu data to predict how and where the virus would spread, or bank data to understand who was likely (or not) to pay back loans.
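For the technically curious, that prediction loop can be sketched in a few lines of Python. This is purely illustrative – the borrowers and figures are invented, and a simple linear classifier stands in for the neural networks described above – but the shape is the same: fit a model to past examples, then ask it to guess about a new one.

```python
# A toy illustration of machine learning as prediction: fit a model to
# historical examples, then guess the outcome for unseen data.
# All figures are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each past borrower: [annual income in £k, existing debt in £k]
past_borrowers = [[55, 5], [28, 20], [75, 2], [33, 15], [60, 8], [22, 25]]
repaid_loan = [1, 0, 1, 0, 1, 0]  # 1 = repaid, 0 = defaulted

model = LogisticRegression()
model.fit(past_borrowers, repaid_loan)  # discern a pattern in the data

# Predict for a new applicant the model has never seen
new_applicant = [[48, 10]]
print(model.predict(new_applicant))        # predicted outcome
print(model.predict_proba(new_applicant))  # the probabilities behind it
```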
AI has been around since the 1950s, and has gone through successive cycles of boom and bust as scientists tried to mimic the operation of the human brain through contemporary computer hardware. Limitations in technology often scuppered development. But the work was kickstarted into a new era in the late 2010s with advancements in hardware, and clever technical workarounds by academics who began to better programme neural networks.
This move from task-specific AI to generalisable AI – where a neural network looks at what it has been given before and tries to predict the most likely outcome – has provided much of the impetus for the current revolution, and it is how we got to the present day. Generative AI is slightly different, but still fundamentally the same underlying tech. “It’s still using data to make predictions,” says Giansiracusa. It’s just that instead of predicting where flu will strike next, or who is lax about loan repayments, it’s guessing the next logical word in a sentence.
“ChatGPT has a lot of data on textbooks, tweets and things on the internet,” explains Giansiracusa. “It scans through all this text, one word at a time, based on the interaction, then tries to predict what the next word is going to be.” Then it generates what it thinks is the answer. So the large language model (LLM) behind ChatGPT is more likely to produce the word ‘you’ or ‘bacon’ when given a partial sentence reading ‘I love’ rather than ‘taxes’ or ‘murder’.
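That word-guessing can be demonstrated with a toy model. The sketch below – a deliberately crude stand-in for an LLM, built on an invented six-sentence ‘corpus’ – simply counts which words tend to follow which, then predicts the most common continuation. Real models use neural networks trained on billions of documents, but the principle is the same.

```python
# A toy next-word predictor: count which word follows which in a tiny
# invented corpus, then predict the most frequent continuation.
from collections import Counter, defaultdict

corpus = (
    "i love you . i love bacon . i love music . "
    "i pay taxes . we love you ."
).split()

# Bigram counts: for each word, tally the words seen directly after it
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("love"))  # -> 'you', the likeliest continuation
```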
It’s a guessing game – one backed up by huge volumes of information, admittedly. Which is why it’s worth bearing in mind the drawbacks of AI, as well as its benefits for the human resources function.
What does AI mean for HR?
For Uma Gunasilan, associate dean of research at Hult International Business School, who has been studying the potential use of AI across business, there’s an opportunity for it to disrupt and supercharge the HR function. “The spotlight is on redefining roles,” she says, imagining a future where “HR professionals become AI conductors, guiding data streams for insights”.
Such changes will mean an enhancement of HR’s skills, rather than a replacement. “This doesn’t mean HR’s exit; it means a pivot,” she says. She believes that people professionals should step into advisory roles, focusing on empathy, employee development and strategic workforce planning, leaving AI to crunch data.
But bear in mind that key tenet: AI is only as good as the data it’s trained on. “With new AI and LLM technologies gaining traction in every enterprise, it is essential for everyone to understand how to uncover valuable insights from these tools,” says Libby Duane Adams, chief advocacy officer and co-founder of Alteryx, a global software company. “They must be able to ask the right questions, implement the right data techniques and yield helpful outcomes.”
What are the potential pitfalls?
Some companies have already adopted AI in recruitment and employee wellbeing. Global consulting giant Accenture uses an AI-powered tool to mitigate unconscious bias in its job descriptions and interview process, while Adidas uses it to forecast what job roles will be needed in the near future. “Various applications of generative AI would be around potentially helping to draft job descriptions, helping to draft employee emails, suggesting questions for interviews and performance objectives,” says Hayfa Mohdzaini, senior research adviser for data, technology and AI at the CIPD.
However, for every success it’s worth bearing in mind the risks. Training AI on the right data, to ensure it isn’t biased, is important: academic and industry studies have repeatedly shown the inherent biases in AI models. Ask AI image generation tools to produce shots of CEOs, for instance, and they’ll more likely than not show you a middle-aged white man. And even when you think you have control over the inputted data, the results can still be superficial. When People Management’s guinea pigs ran 10 personal photos of themselves through the AI software Try it on, some were given hair transplants, a facial nip and tuck and a brand new wardrobe in what the software deemed were ‘business shots’.
For that reason, even AI development company Hugging Face isn’t using AI in most of its HR functions, says Emily Witko, the firm’s head of people, culture and diversity, equity, inclusion and belonging. After the company raised funding, it had 2,400 new job applications flood into the system. “Someone on LinkedIn was like: ‘Use our AI résumé reading tool that will actually help you go through these a little bit faster,’” Witko says. “Those are the tools that I feel are probably still the most problematic at this point.”
People leaders ought to be aware of the risks of bias and how they can be amplified by the use of AI, and only use the technology if they’re fully confident that it will work. One example of a successful integration of generative AI can be seen at language learning app Ling, which this summer wanted to give employees better, 24/7 access to HR support should they need it. It uploaded a series of documents about HR processes so that ChatGPT could draw on the company’s policies. “We then asked ChatGPT to create a chatbot that could answer questions, sort of like an FAQ page of questions employees commonly ask,” says HR manager Jarir Mallah.
“It worked like a charm, but we had to consider privacy issues. It’s still very unclear what happens to the information uploaded into any AI interface, so be sure never to share proprietary or personal data.”
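Ling hasn’t published its implementation, but the general pattern is straightforward to sketch. The hypothetical Python below uses the openai library’s chat interface to answer staff questions from a snippet of deliberately generic policy text placed in the model’s instructions; the model name and policies are placeholders, and – as Mallah warns – nothing personal or proprietary should ever go into the prompt.

```python
# A minimal sketch of a policy-grounded HR chatbot, along the lines the
# article describes. Not Ling's actual code; model name and policy text
# are placeholders. Requires `pip install openai` and an API key.
from openai import OpenAI

# Only include material that is safe to share with a third party:
# no personal data, nothing proprietary.
POLICY_EXCERPT = """
Annual leave: staff accrue 25 days per year, booked via the HR portal.
Remote work: up to three days per week, agreed with your line manager.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_hr_bot(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an HR FAQ assistant. Answer only from the "
                    "policy text below; if the answer is not there, say "
                    "so and refer the employee to the HR team.\n"
                    + POLICY_EXCERPT
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_hr_bot("How many days of annual leave do I get?"))
```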
The company is continuing with the trial because of the time saving it offers its HR team – and the benefits for staff. “Only by combining high-quality data, diverse human intellect and business context,” says Duane Adams, “can AI enable HR organisations to better understand the ‘why’ behind the ‘what’ – to make well-informed decisions, proactively address the factors influencing employee turnover and develop targeted retention strategies.”
What does AI mean for staff?
Any business announcing that AI will be introduced across its workplace will likely see two responses: excitement and dread. Excitement, because AI promises to make many people’s lives easier: mundane tasks such as sending out stock emails or analysing vast reams of data and producing reports on it can be automated in an instant. Dread, because the technology’s prowess means that two-thirds of occupations could be at least partially automated by AI, according to one analysis.
Assuaging employees’ fears about the adoption of AI and what it means for them is important – and any rollout of the technology ought to be communicated proactively, ahead of time, to reduce anxiety. “Address these concerns head on by emphasising that AI is a tool to augment their capabilities, not replace them,” says Paul Wood, founder of C-PAID, a Manchester-based business that has considered adopting AI.
Katie Obi, chief people and transformation officer at talent lifecycle management company Beamery, says some of those time-saving tools include AI-powered “co-pilots”. “They help make people more efficient at writing emails, crafting content and answering questions using chatbots,” she explains. Beamery has adopted generative AI elements in its talent recruitment and data management systems. “We run hackathons to work out the best ways to incorporate AI into our product.”
For Mallah, being involved in the company-wide discussions about using AI is imperative. “To remove the human aspect of work or make it feel like it has been made inaccessible will have negative effects on the wellbeing of your workforce,” he says. Mallah involved employees proactively in the conversation. “AI ideally shouldn’t be implemented without consideration from staff,” he says. Testing it in small groups before adopting it across the company is also a way to avoid headaches if it turns out not to be a good fit with the way your business works.
For Kate Redshaw, head of practice development for the employment team at Burges Salmon, there’s a need to ask questions about the knock-on effects of rolling out AI technology throughout your workplace. “Are roles changing as time-saving tools are introduced? Is autonomy being sacrificed as an algorithm instructs your team member on what to do? Are projects being allocated by machine rather than by line manager?” she asks. “None of this is to suggest you should reject the use of AI, but is there a need to pause for thought to consider how you are preparing your people, psychologically as well as technically, for these changes as you might with other types of change programme?”
How to develop a company-wide approach
Beyond what it means for individuals, it’s important that HR teams consider what it means more broadly. Should staff be allowed to use AI? Should they be encouraged to do so? What are the risks and what are the rewards?
“Businesses should have an AI policy, which dovetails with other relevant policies, such as those relating to data protection, IP and IT procurement,” says Ben Travers, partner at Knights, who specialises in AI, IP and data protection issues. “The policy should set out the rules on which employees can or cannot engage with AI. Businesses need to decide how they are going to address these risks, reflect these in relevant policies and communicate these policies to their teams.”
Staff need to be clearly informed about what is and isn’t acceptable to insert into generative AI tools, to avoid issues such as those Samsung faced in the early days of ChatGPT, when proprietary information was fed into the LLM powering it and was potentially used to train the model. Samsung, like a number of other companies, has since blocked the use of public generative AI tools such as ChatGPT.
The risks of losing control of data are what keep Witko away too – as well as the potential for AI to misfire. But despite her scepticism, her answer when asked whether organisations should adopt AI is still ‘yes’. “I get excited by new technology,” says Witko. “There are always challenges. And it’s just about navigating them.”