fired by code: how amazon is leading the way with a new kind of terminator
Back in the 1970s, comedian Bob Newhart imagined a world where machines fired machines for failing their performance reviews. Fast forward to today, and we’re getting awfully close to that being less of a punchline and more of a daily reality. But before machines fire other machines, they appear to be starting with humans at Amazon warehouses, where software that tracks whether workers are meeting their quotas can recommend those who fall short for termination. According to court documents obtained by The Verge, more than 10% of Amazon’s workforce in some fulfillment centers may have been fired by robots, something straight out of a dystopian sci-fi novel.
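To make the mechanism concrete, here is a minimal sketch of how this kind of quota-tracking software might work. Every name, threshold, and data shape below is a hypothetical assumption for illustration, not Amazon’s actual system:

```python
from dataclasses import dataclass

# Hypothetical sketch: none of these names or thresholds come from
# Amazon's real software; they only illustrate the general idea.

@dataclass
class ShiftRecord:
    worker_id: str
    units_processed: int
    quota: int

def flag_for_review(shifts: list[ShiftRecord], miss_threshold: float = 0.3) -> set[str]:
    """Flag workers who missed quota on more than `miss_threshold` of their shifts."""
    misses: dict[str, list[bool]] = {}
    for s in shifts:
        misses.setdefault(s.worker_id, []).append(s.units_processed < s.quota)
    return {
        worker
        for worker, missed in misses.items()
        if sum(missed) / len(missed) > miss_threshold
    }

shifts = [
    ShiftRecord("A", 95, 100),   # missed quota
    ShiftRecord("A", 80, 100),   # missed quota
    ShiftRecord("A", 110, 100),  # met quota
    ShiftRecord("B", 120, 100),  # met quota
    ShiftRecord("B", 101, 100),  # met quota
]
print(flag_for_review(shifts))  # {'A'}
```

The unsettling part is how little code it takes: once the flagged set feeds directly into a termination recommendation, the "decision" is just an arithmetic threshold with no context about why quotas were missed.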
Now, it should be noted that Amazon emphatically denies that robots are automatically firing workers, saying that terminations can be appealed, that it puts struggling workers on performance plans, and that it supports those flagged by its automated systems. This seems a bit suspect when you consider just how many people are flagged for termination by those systems and end up leaving shortly after, but it’s possible that the performance plan is more of a “we’ll give you one more chance to keep up” than an actual attempt to set and meet realistic metrics, which would make the company’s statement technically true. In the grand scheme of things, however, the future of work will inevitably involve robots making HR decisions.
We live in an era of quantified workplaces. We can track every hour of work, every task, every artifact, and do it to a level most of us would describe as extreme and overwhelming. With so many bosses still running offices the same way they learned to manage factories, to say nothing of the actual factories they may be running, there will be a strong temptation to use robotic HR systems to constantly evaluate and micromanage workers. This will come at the cost of morale, as already unhappy and mismanaged employees are demonstrably shown that they’re just cogs in a machine to those running the companies they work for. And it will be even more problematic if the metrics used for those evaluations are seen as arbitrary and unrealistic.
Not only will workers feel that management treats them as less than human, but companies will inevitably face lawsuits claiming that managers manipulated metrics to fire employees they didn’t like for petty reasons, retaliate against whistleblowers, or cover up discrimination and cases of sexual harassment or abuse in the workplace. Let’s remember that AI systems can be every bit as biased as the people they’re meant to replace, if not more so, if they’re fed data which trains them to be bigoted or discriminatory, so saying “it’s all done by a computer” won’t be a sound defense. That said, given good training data, an appropriate set of metrics, and a design that allows a two-way conversation between bosses and the workers they supervise, these systems could flag bad behavior to be addressed while highlighting genuine achievements that would otherwise go unnoticed.
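How does biased training data become a biased system? A toy example with entirely made-up data shows the pipeline: a “model” that simply learns historical approval rates per group will faithfully automate whatever discrimination the history contains.

```python
from collections import Counter

# Toy illustration with fabricated data: group "X" was systematically
# denied in the historical record, group "Y" was mostly approved.
history = (
    [("X", False)] * 80 + [("X", True)] * 20 +
    [("Y", True)] * 70 + [("Y", False)] * 30
)

def learn_approval_rates(records):
    """'Train' on history by computing each group's past approval rate."""
    approved, total = Counter(), Counter()
    for group, was_approved in records:
        total[group] += 1
        approved[group] += was_approved
    return {g: approved[g] / total[g] for g in total}

rates = learn_approval_rates(history)
# The "learned" policy: approve a group only if its historical rate passes 50%.
policy = {g: rate >= 0.5 for g, rate in rates.items()}
print(rates)   # {'X': 0.2, 'Y': 0.7}
print(policy)  # {'X': False, 'Y': True} -- the old bias, now automated
```

Real HR models are far more complex, but the failure mode is the same: the computer isn’t neutral, it’s a mirror of the decisions it was trained on.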
But using those systems correctly will take a sharp eye, good advice from experts, and auditing to make sure best practices are really being followed. That could happen as part of a regulatory effort, especially since violations of those practices would often end up in court or arbitration, though the threat of horrific PR, as well as the sheer cost of quantifying every possible action in a workplace, should deter blatantly bad behavior and leave regulators to deal with only the most egregious cases. Either way, automated HR is coming, and we need to take its design very seriously. Millions of livelihoods will depend on it, and large-scale mistakes and bad implementations could easily trigger major recessions. We owe it to everyone who works for a living to get it right.