How to beat the machine

Last spring, Lizzie Carlyle* found herself staring into a screen in a state of rising despair. With 20 years’ experience in her field, she had applied for a mid-senior-level role she was confident she could do, and after filling out the application, she had been invited to interview.

But the interview was not with the hiring manager; it was with an algorithm: a five-minute video assessment known as an “asynchronous interview”, which involves sitting in front of a computer and answering questions as they appear on the screen.

“I was a bit taken aback, to be honest – like, really? No one’s going to call me and have a chat? But I was in a redundancy position and not in a position to be too choosy, so I thought, I’ve just got to do it.” During the interview, which was done using artificial intelligence software from a company called Modern Hire, four competency-based questions appeared one by one on the screen, with roughly a minute to record each answer.

It was, she recalls, “absolutely hideous. It was obviously very depersonalised, and felt robotic and inauthentic. After you answer each question you get the option to retake the video – and of course, watching yourself is awful anyway, so I kept retaking them. It ended up taking a lot longer than five minutes.

“Later, I got called back for a real interview, and got the job. But I did feed back to my line manager that the video interview process was awful. He laughed and said, ‘Oh yes, your video responses were shocking.’ I said, ‘Well why did you hire me then?’ And he said mine was the best of a bad bunch. I really challenged him on it: why do video interviews at all then, if they’re not a true representation of someone’s capability? He said, ‘I don’t know, it takes time away from me, doesn’t it?’”

The irony is that Lizzie works in talent acquisition: she was already familiar with this new type of technology, having seen it used to whittle down the thousands of applicants to the graduate scheme at a major corporation where she previously worked.

“I think it has its place, but it’s not the expectation for that level of role – and it was a complete waste of time,” she says. “Interviews should be a meeting of minds, a two-way process that’s as much about the company telling you about them as what you can give them, especially in this candidate-driven market.”

The new normal

To those applying for jobs now, AI recruitment software is the rule rather than the exception, and you will almost certainly encounter it in one form or another. Virtually every Fortune 500 company, for instance, uses an applicant tracking system (ATS), a blanket term for software that allows companies to automate the application process, such as scanning CVs and cover letters for keywords. (One hack I heard of to beat the ATS involves hiding hundreds of keywords in microscopic transparent font at the bottom of a CV.)
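The keyword screening described above is, at its simplest, a counting exercise — which is also why the hidden-keyword hack can work. Here is a minimal sketch of how such a screen might rank a CV; the keyword list and CV text are invented for illustration:

```python
import re

def keyword_score(cv_text, keywords):
    """Count how many required keywords appear in the CV (case-insensitive)."""
    words = set(re.findall(r"[a-z+#]+", cv_text.lower()))
    return sum(1 for kw in keywords if kw.lower() in words)

# Hypothetical job-description keywords and candidate CV
keywords = ["python", "sql", "stakeholder"]
cv = "Experienced analyst: Python, SQL, and stakeholder management."
print(keyword_score(cv, keywords))  # 3
```

A real ATS is more sophisticated, but the principle is the same: text invisible to a human reader is still perfectly visible to the matcher.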

Meanwhile, LinkedIn is enabling recruiters to screen in ever more automated ways: one friend was asked to complete two skills tests on the site before she could submit her application for an operations director role. “The tests alone took 45 minutes, on top of the long-form questions, and there wasn’t a callback, or even a ‘thanks but no thanks’. It was really poor form, I thought.”

The wild west of the sector, however, is the video interview, and the myriad ways in which software companies are claiming AI can do much of the assessing, saving time and resources for recruiters. At one end, it may be straightforward language recognition technology, ranking candidates’ videos according to whether they mention all the skills and competencies in the job description.

But at the other extreme, some companies claim they can analyse and describe a person’s personality from a short video, assessing facial movements, tone of voice, body language and more – so you might be scored on how much you smile and make eye contact, for instance.

It’s this type of assessment that was criticised in a recent Cambridge University study, which replicated a version of the AI used by a German company and found it to be biased in some truly bizarre ways. Those wearing glasses, for instance, were judged to be less conscientious, while sitting in front of bookshelves or with art on your wall counted in your favour.

“We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers,” said Dr Eleanor Drage, co-author of the study, describing the phenomenon as “technosolutionism”. “While companies may not be acting in bad faith, there is little accountability for how these products are built or tested.”

It may sound alarmist, but there are worrying examples of this kind of AI-based technology going badly wrong. In 2018, Reuters broke the story that Amazon had abandoned a years-long experimental computer program designed to mechanise job searches after it was found to be sexist. The reason? Bad data: the computer models were trained to vet applicants based on CVs that had been submitted for technical jobs, which were mainly from men – effectively teaching itself that male candidates were preferable.

Human bias

All of these companies, however, market their software not just as time-saving automation – especially useful for the type of roles that receive hundreds of applications – but as a way to eliminate human bias in recruitment, from sexism to racism to ageism. There is even such a thing as “lookism”: physically attractive candidates have repeatedly been shown in studies to be more likely to be interviewed for jobs and hired, to be promoted more and to earn higher wages. Some of this bias may be unconscious, but not always.

“I used to work in one organisation where the recruiter told us that she’d helped the teams out because she’d taken out all of the names that were hard to pronounce from the pile of CVs,” says workplace culture expert Bruce Daisley, author of the recent book Fortitude, who was previously a VP at Twitter.

One of the biggest players in the AI video recruitment field is HireVue, which counts among its clients the Co-operative Bank, G4S, Vodafone and Unilever. According to its chief scientific officer, Lindsey Zuloaga, a huge focus is on training the algorithm to be less biased than a human would be. For instance, while the algorithm is regularly tested on new cohorts to check for bias, it doesn’t continually learn from raw data, “where anything could happen – it could be judging your glasses or your background”. The company also applies an industry-standard “four-fifths” rule, which raises a red flag if any demographic group scores less than four-fifths of the points of the top-scoring group.
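The four-fifths rule mentioned above is simple arithmetic, and can be sketched in a few lines of code. This is a minimal illustration only — the group names and pass rates are hypothetical, not HireVue data:

```python
def four_fifths_check(group_rates):
    """Flag any group whose rate falls below 4/5 of the top-scoring group's rate."""
    top = max(group_rates.values())
    threshold = 0.8 * top
    return {group: rate < threshold for group, rate in group_rates.items()}

# Hypothetical selection rates per demographic group
rates = {"group_a": 0.62, "group_b": 0.55, "group_c": 0.44}
print(four_fifths_check(rates))
# group_c is flagged, because 0.44 is below 0.8 x 0.62 (= 0.496)
```

In US employment law this is known as the adverse-impact ratio; a flag doesn’t prove bias, but it signals that the selection process needs closer scrutiny.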

“The major companies in our space have been brought up to a pretty good standard,” insists Zuloaga, but she has seen some companies operating internationally that “seem sketchy. If you’re not careful, just like with any powerful tool, it can be used for good or cause harm. And there are some obvious ethics, like that you should tell someone AI is going to be used on them and give them the option to understand more about that if they’d like to, or if they have a disability and think it’s not going to work well on them, they need to be able to opt out of being evaluated in that way.”

HireVue describes itself as a pioneer of video and asynchronous video interviewing, but in the past couple of years it stopped using visual analysis of any sort. “When we did use it, all it did was quantify muscle movements in the face, looking at what people are expressing, but there’s a lot of controversy to that,” says Zuloaga.

“For certain roles that’s obviously important – if you smile or look disgusted, that matters in a flight attendant role for example. But for all our history, words have been the most predictive thing, and have had the most value. And as the natural language processing technology has got better and better, we saw that the nonverbal stuff was just not offering enough value to deal with the concerns that it was causing. People were worried that we were looking at skin colour, or thinking, ‘If my eyebrow moves in a weird way, am I not going to get the job?’ That was never the case, but it is a lot easier to talk about language and make people feel comfortable with that.”

(By chance, one of the case studies I spoke to did a HireVue interview for a market manager role at a major travel company, and rated the experience highly. “It was weird, but I actually enjoyed it,” he said, adding that he fully appreciated why some form of automation could be necessary. “There were 79 applicants for that one position. By the time you get to the last one, you’re going to forget the first one.”)

Orwellian future?

Even so, many are alarmed at the rise of AI recruitment. A 2021 study by Harvard Business School coined the term “hidden workers”, expressing concern about those applicants who consistently fail due to hiring processes that, for example, eliminate anyone with a gap in full-time employment. And there are those for whom a pre-recorded video interview is difficult in itself, whether because they don’t have a quiet, private space in which to record it, lack high-quality internet access or the technical skills, or simply find the concept cripplingly intimidating.

“I know from working in social media companies the way that new technology is embraced is with wide eyes and optimism and hope,” says Daisley. “But when some of these things get going, they’re a runaway train – where’s the evidence for it? Where’s the veracity? I think these organisations are trying to project certainty and clear signals into a world that’s full of doubt, ambiguity and uncertainty. We’re all familiar with going to a job interview, feeling it didn’t go well and getting a fairly opaque rejection, but when it’s done via machine I think it’s a little bit more sinister.

“The only reason why the modern workforce might not describe it as Orwellian is maybe because they’re not as familiar with the work of George Orwell as previous generations,” he continues. “It’s such an intimidating prospect: you record a three-minute interview into a screen, and the computer says no based on what you’ve recorded. What a Black Mirror vision of the future.”

* Not her real name

How to beat the machine

Silicon Valley-based career coach Bill Benoist offers four tips and tricks for AI interviews

  • Set the scene: “Sit approximately 18 inches away from the camera – to ensure your upper body is visible as well as your face – and declutter the view behind you.”
  • Reach for the STARs: “I always tell my clients to think of three or four significant events from their recent career, and then create ‘STAR stories’ – this stands for a situation, your test or task, the action you took and the result. Write down bullet points for each. One STAR story can answer multiple types of questions.”
  • If the AI is assessing visually, act natural: “Studies show 55 per cent of communication is non-verbal, which shows how important body language is. Be sure to smile. And showing the palms of your hands, for instance, shows that you’re trustworthy.”
  • Hit the keywords: “Make sure you use the exact words on the job description for the skills they’re looking for: the machine is ranking you, not a human being who might recognise similar words for the same thing.”
