There is no doubt that so-called "big data" and artificial intelligence can enormously increase efficiency and productivity in the workplace, but mathematical models deployed without sufficient oversight can also cause significant harm.
Horror stories abound about algorithms in the workplace, many of them justified. From hiring to firing, algorithms dictate which candidates are interviewed and subsequently hired, determine working pace and style, define acceptable (and unacceptable) productivity and, in some instances, decide who will face the chopping block. All too often, the extent to which machines have had a hand in such decisions remains unclear and, rightly or wrongly, this is becoming the norm.
On the other side of this equation (pun intended), a lot of work remains before the use of algorithms in managerial decision-making can be considered unbiased. That is hard when "adding a touch of AI" is a sales pitch and the algorithms behind it often come off the shelf. These off-the-shelf solutions frequently lack the feedback loops that would stop a biased algorithm from being left to make harmful decisions.
The most clear-cut example of this, though rudimentary by the standards of today’s algorithms, is also a warning. In the 1980s, St George’s Hospital Medical School in London wanted a faster way to sift an overwhelming number of applications. The screening program was optimised for efficiency, looking for candidates similar to those who had been accepted before, and overwhelmingly selected white males. A few years after it was introduced, the British government’s Commission for Racial Equality found the medical school guilty of racial and gender discrimination.
It’s clear, then, that automating decision-making can lead to mistakes. Yet despite this knowledge, dystopian warnings of over-zealous control by machines are fast becoming reality.
Liam Brown’s 2017 novel Broadcast touches on a world in which we all wear chips that optimise our productivity, and companies feed the resulting data into algorithms that gauge how good you are at your job. That world seems to have arrived. Peakon, for instance, has grown into a fully fledged "software as a service" platform for employee retention, claiming to tackle employee engagement with actionable insights that head off problems before they arise. We now talk about "work-life balance" and "mental health" in the same breath as optimal productivity, so much so that even going to the loo a few too many times is potentially a red flag.
Already, in warehouses the world over, supposedly intelligent machines are managing humans, making work more gruelling, less rewarding and potentially more dangerous in the process. In 2019, it emerged that Amazon had put in place technology that could automatically fire its least productive warehouse workers. And nowhere is this risk more evident right now than in the gig economy. Uber could not exist without algorithms, after all.
In recent times, Uber Eats couriers and drivers have blamed changes to the algorithm for slashing their incomes and, in some cases, costing them their jobs. In an unsustainable quest for ever-increasing productivity, AI compounds the power imbalance between management and workers.
Where human decision-making draws on creativity and lateral thinking, AI optimises for comparatively narrow objectives. Housekeepers at a hotel group in the US, for instance, complained that a new tool for optimising room assignments made their jobs harder and hotel guests less satisfied. Because it took away their ability to organise their own day, they could no longer prioritise cleaning current guests’ rooms while those guests had popped out; a clean room is something nice to return to. It also sent them walking far further around hotel floors, making their work even more physically demanding. All too frequently, misuses of AI have a disproportionate impact on lower-paid workers.
But it’s not all bad. AI can make us fairer. We can use AI to remove gendered words from job descriptions, something especially important in tech, where women are still grossly underrepresented. Companies are also using AI to strip out details that identify ethnicity, gender and religion on CVs, allowing hiring managers to compare applicants on their experience alone and limiting the unconscious bias that can creep into the hiring process.
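To make the first of those ideas concrete, here is a minimal sketch of how a gendered-language check on a job advert might work. The word lists are illustrative assumptions, a handful of terms rather than a validated lexicon; real tools draw on much larger, research-backed lists and usually suggest neutral alternatives.

```python
import re

# Illustrative word lists only; these are assumptions for the sketch,
# not a validated lexicon of gender-coded language.
MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "fearless"}
FEMININE_CODED = {"supportive", "nurturing", "collaborative", "loyal"}

def flag_gendered_words(text: str) -> dict:
    """Return the gender-coded words found in a job description."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

if __name__ == "__main__":
    advert = "We want a competitive, fearless rockstar to join our supportive team."
    print(flag_gendered_words(advert))
    # {'masculine': ['competitive', 'fearless', 'rockstar'], 'feminine': ['supportive']}
```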
The fact is that our understanding of the impact of algorithms on the world of work is very much in its infancy. When AI is trained to optimise and increase efficiency, it can completely overlook considerations as important as fairness, equality of opportunity and diversity.
So we need to ask about the unseen cost of an algorithm even as it speeds up a process, and to evaluate its effectiveness on an ongoing basis. If there is no feedback loop, we don’t know who or what is being excluded, or what negative impact its implementation may have.
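What might such a feedback loop look like in practice? One common form is routine disparate-impact monitoring, sketched below. This is a generic illustration rather than any particular company’s system: the decision log is invented, and the four-fifths threshold is simply one widely used heuristic (from US employment guidance) for flagging adverse impact.

```python
from collections import Counter

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, was_selected) pairs from an algorithm's decision log."""
    totals, selected = Counter(), Counter()
    for group, chosen in decisions:
        totals[group] += 1
        if chosen:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def adverse_impact(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose selection rate falls below `threshold` times the best rate."""
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * best]

if __name__ == "__main__":
    # Invented log: group A is selected 2 times in 3, group B once in 4.
    log = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False), ("B", False)]
    rates = selection_rates(log)
    print(rates)                  # {'A': 0.666..., 'B': 0.25}
    print(adverse_impact(rates))  # ['B']: 0.25 is below 0.8 * 0.666...
```

Run regularly over real decision logs, even a check this simple would surface who is being excluded before a harmful pattern becomes the norm.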
As always, data must be used alongside human oversight to iterate, to improve and to learn.
Leila Seith Hassan is head of data at Digitas