The machine will see you now

Mike Phipps reviews The Algorithm: How AI Can Hijack Your Career and Steal Your Future, by Hilke Schellmann, published by Hurst.

The news is full of stories about the potential dangers of Artificial Intelligence. But some of the nastiest aspects of the technology are already here. Workers are increasingly being hired and fired by AI – including on the basis of how engaged an algorithm judges their facial expressions to be.

Ninety-nine percent of Fortune 500 companies use algorithms and artificial intelligence in hiring. Alongside résumé screening, the process can include AI games, one-way video interviews and AI tools that conduct background checks by scanning candidates’ online lives.

Résumé screening tools in particular embody the discriminatory assumptions programmed into them. The problem rarely gets picked up because the companies involved have no incentive to tell anyone that the software they built doesn’t work as it should. In fact, many AI vendors require their buyers to sign non-disclosure agreements that prevent them from revealing such flaws.

Tools that scan employees’ social media activity are equally alarming. Candidates can be marked down just for ‘liking’ posts that mention alcohol or include bad language. Defenders of AI in this context argue that the tools being used are simply too primitive: more sophistication is needed, rather than less dependence on the algorithm. The technology used in this process is very similar to that notoriously used by Cambridge Analytica to build psychometric personality profiles of voters and then target individuals to persuade them to vote a certain way.

Schellmann follows the case of one individual who got £8,000 compensation from Bloomberg after the company rejected his job application entirely on the basis of a score he got in an online game he was required to play: the company freely admitted that the rest of his application, including his experience and skills, was not even glanced at. The applicant’s lawyer said that the payment was a clear admission of the company’s wrongdoing.

AI-assessed video interviews also produce arbitrary results. One AI tool ranked candidates highly if they sounded convincing, irrespective of whether they actually answered the question asked. The author herself read a text in German during an interview and was awarded a score of 6 out of 9 for English competency by the AI. She then repeated the experiment, answering all of the questions in German by reading a text from Wikipedia. “To my surprise, I don’t get an error message. Instead, the AI scores me a 73 percent match for the role, even though I didn’t speak a word of English and the things I said in German had nothing to do with the job.”

Again, such defects could be attributed to ‘teething troubles’ in the software rather than to the use of AI itself. But there’s a deeper problem, as the author explains: “The underlying assumption is a form of essentialism: that humans have a stable character and personality traits that can be categorized and compared, and, more importantly, that we reveal our internal character traits and emotions through physical appearance.”

But this is no more valid than 19th-century attempts to find the “face of criminality” by superimposing photos of convicted criminals, or more recent efforts to gauge people’s personality traits from their handwriting – the pseudo-science of graphology. Nor does an AI-led ‘one-size-fits-all’ approach make any allowance for people with disabilities, which in practice means companies may be breaking the law.

For those who are in work, AI surveillance also piles on the pressure. In one workplace, if an employee didn’t move her mouse or tap a key for sixty seconds, the monitoring software deemed she was on ‘idle time’ and penalised her accordingly. At the end of every day her supervisors sent out an email ranking the more than twenty workers on her team from “most productive” to “least productive.” Demeaning and stressful doesn’t begin to describe it.

Today, eight of the ten largest private US employers track the productivity metrics of individual workers. Research suggests that such monitoring has the opposite effect to the one intended – the more surveillance, the worse employee performance gets, not least because workers spend more time trying to prove they are productive rather than getting on with the actual work.

The rise in the collection of physical health data at work, using fitness trackers, is a further cause for concern, especially as it can be sold on without the consent of the workers involved. Even creepier is the growing use of brainwave detection tools to monitor the mental state of employees and improve their efficiency.

What does the future hold? Perhaps we won’t even need to apply for jobs, as algorithms will simply match us to positions based on the data exhaust we leave behind. Of course, “that would be the end of privacy as we know it.”

The collection of data about workers from social media without their knowledge has worrying implications. Could a teacher getting angry with someone on Twitter, or receiving a negative review from a student, lead to them being fired or blacklisted? This again raises the question of what data about people is actually worth harvesting – and whether it is AI itself or the all too fallible programmers behind it who are the real problem in this dystopia.

The author correctly identifies one major source of the problem as the monetary pressure involved in building AI tools. “What if we could start a not-for-profit organization that will test and build AI tools in the public interest?” she asks.

Ultimately, the art of predicting whether would-be employees will bring success to a business is about as accurate as predicting any other aspect of the future. There’s no clear reason why AI should do it any better than traditional methods.

Mike Phipps’ book Don’t Stop Thinking About Tomorrow: The Labour Party after Jeremy Corbyn (OR Books, 2022) can be ordered here.