TUC research points to employment downside of AI
TUC research has found that the use of supposedly ‘objective’ artificial intelligence (AI) in the workplace risks exacerbating discrimination and unconscious bias, opening employers up to complaints and legal challenges.
The study, Technology Managing People – the legal implications, found that the Covid pandemic has accelerated interest in AI applications for assessing and managing staff, supporting remote and digital forms of working, and reducing the need for human contact. Employers, the study claims, are using AI to run performance management systems, check on activity, and monitor facial expressions and tone of voice when assessing candidates’ suitability for roles.
It also highlights examples from employers such as Amazon and Uber. Amazon adopted a specialist AI recruitment tool to find the best candidates, but the tool’s reliance on historical data meant it picked up and magnified an existing bias, leading it to ignore applications from women. Uber was taken to court by drivers who claimed the decision to sack them had been made by an unreliable AI algorithm.
The study concludes that workplace regulation needs to take into account the potential for intrusion and bias by technology that cannot be assumed to be fair or reasonable.