Depending on who you speak to, you might receive any number of predictions for the future of work. Lists, prophecies, and projections abound. One feature they share universally, however, is the impact of advancing artificial intelligence and automation.
McKinsey places it as one of four major changes shaping the future of work, along with increased connectivity, lower transaction costs, and fundamental societal shifts (particularly around employee values). The OECD estimates that 14% of jobs in OECD countries are likely to be automated and, perhaps even more interestingly, a further 32% are likely to be partially automated.
Typically, a subset of roles is seen as most susceptible to automation. Yet even for these more mechanistic jobs, the forecast is now that automation may not replace the role entirely but instead supercharge its human component. People will work in ever closer proximity and collaboration with computer systems to increase efficiency and capability.
This means that, while it is true some roles will no longer exist, an even bigger question can be posed about how we will interact with automated processes, artificial intelligence, and machine learning systems.
“…reframing the question of AI and the future of work around activities suggests that a useful strategy is to begin with tasks that comprise a job and imagine the computers doing the ones they can do best and people doing the ones they can do best. Taking such an approach means thinking less about people OR computers and more about people AND computers.” (MIT, 2020)
Where exactly different tasks and jobs fall on this machine-human continuum will be context-dependent and fluctuate over time. An extreme, yet increasingly prominent, example of this today is people beginning to work not just with machines, but for machines, in an emerging space called “algorithmic management”.
So, what does this look like? How will people feel working for computers? Will our usual satisfactions and joys at work be dampened or elevated? How will the day-to-day be impacted? What can we learn from the early iterations of algorithmic management, and what does this mean for leaders operating in this space?
Algorithmic Management: Case Studies
We are witnessing the rise of what researchers are calling “algorithmic management”, where managerial functions are delegated to computer algorithms. Algorithmic management can track work processes to increase visibility, identify patterns for making predictions and optimising processes, and then automate decisions based on the data.
Today, we see algorithmic management most visibly in the “gig economy.” For example, the jobs of Uber and Lyft drivers are governed by computer algorithms that track where riders and drivers are, match them with one another, prescribe routes, and process payments. By integrating rider reviews, the algorithms also manage driver feedback and performance, and drivers’ access to the platform is automatically granted or revoked based on their overall rating.
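To make the mechanics concrete, the sketch below shows the kind of rating-threshold rule described above in highly simplified form. It is a hypothetical illustration only: the data structure, rolling window, and cut-off value are assumptions, not the actual logic used by Uber, Lyft, or any other platform.

```python
from dataclasses import dataclass

@dataclass
class Driver:
    driver_id: str
    ratings: list[float]  # most recent rider ratings, on a 1-5 star scale

# Illustrative cut-off only, not any platform's actual policy.
RATING_THRESHOLD = 4.6

def rolling_average(ratings: list[float], window: int = 100) -> float:
    """Average the most recent `window` ratings."""
    recent = ratings[-window:]
    return sum(recent) / len(recent) if recent else 0.0

def platform_access(driver: Driver) -> str:
    """Grant or revoke platform access automatically from the rolling average."""
    return "active" if rolling_average(driver.ratings) >= RATING_THRESHOLD else "deactivated"

# A driver whose recent average slips below the threshold is deactivated
# with no human review anywhere in the loop.
driver = Driver("d-042", ratings=[5, 5, 4, 3, 4, 5, 4, 4])
print(platform_access(driver))  # -> deactivated
```

The point of the sketch is what is missing: once the average dips below the threshold, deactivation is automatic, with no human decision-maker anywhere in the flow.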
Beyond this, predictive algorithms are now commonly used in recruitment systems and work scheduling. According to their adopters, this is creating huge efficiency gains. But for employees, this early incarnation of algorithmic management is creating work that is at best increasingly meaningless, and at worst, actively harmful.
To understand why, it is helpful for us to explore what makes work meaningful in the first place. Decades of research in organizational psychology have shown that workers find meaning and fulfillment in their work when they:
- exercise skill variety
- encounter novelty
- work autonomously, and
- feel a sense of significance in themselves and in their work.
Applied as it is today, with a sole focus on business efficiency, algorithmic management is eroding many of these fundamental employee needs.
By design, algorithmic management prefers simplicity over complexity. The simpler the tasks that workers carry out, the easier they are to measure, track, and compare. This preference inherently limits skill variety and novelty, by seeking to make work tasks as straightforward and predictable as possible.
To squeeze further efficiencies from these tasks, sensors track employees with a high degree of specificity, seeking to optimise their every move. This amounts to hyper-surveillance that removes any semblance of autonomy from their work.
Amazon’s Warehouse
In 2018, Amazon patented a wristband that can precisely measure the location of a worker’s hands as they retrieve and deliver boxes in a fulfillment centre. The wristband provides ‘haptic feedback’ to the employee through vibrations that guide them to the correct shelves. It’s still not clear whether these wristbands have been implemented in practice. Nonetheless, as an anonymous fulfillment centre worker wrote for the Guardian in 2018: “Through the use of digital trackers and indicators, our workday is managed down to the second.”
In November 2021, a group of UK MPs concluded in a report on AI at work that: “pervasive monitoring and target-setting technologies, in particular, are associated with pronounced negative impacts on mental and physical wellbeing as workers experience the extreme pressure of constant, real-time micro-management and automated assessment.” Moreover, the report acknowledged that algorithmic management in this form had increased significantly since the start of the pandemic.
What recourse do these workers have against such intrusive technologies? Surely, if they have a problem with the working conditions, they can take it up with HR? Perhaps not. In 2019, it was revealed that Amazon’s algorithmic managers had not only tracked workers’ every move, but they had also automatically issued warnings and letters of termination to those employees who didn’t meet their productivity quotas, without any input from human supervisors.
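To see how stark that is, here is a hypothetical sketch of fully automated performance enforcement of the kind reported. The quota, warning count, and actions are illustrative assumptions rather than a description of Amazon’s actual system.

```python
from dataclasses import dataclass

@dataclass
class WorkerRecord:
    worker_id: str
    items_per_hour: float
    quota: float = 300.0   # illustrative productivity quota, not Amazon's
    warnings: int = 0

def enforce_quota(record: WorkerRecord) -> str:
    """Escalate from warning to termination with no supervisor in the loop."""
    if record.items_per_hour >= record.quota:
        return "no action"
    record.warnings += 1
    if record.warnings >= 3:
        return "termination notice generated"
    return f"automated warning {record.warnings} issued"

# Three consecutive shifts below quota end in an automated termination notice.
record = WorkerRecord("w-117", items_per_hour=240.0)
for _ in range(3):
    print(enforce_quota(record))
```

Everything a human manager would once have weighed, such as context, extenuating circumstances, or a conversation, is simply absent from the control flow.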
Think about that: one of the most difficult interactions that leaders experience – letting an employee go – has become fully automated.
Whither goes the human?
So what does all this mean? Does the integration of algorithmic management inevitably spell the end of meaningful work? It may seem so from these examples. But optimising the economics of business processes does not have to be at odds with optimising the employee experience. What we are seeing currently may simply be an imbalance of attention, with the focus placed too squarely on the bottom line to the detriment of employee wellbeing. It is up to the leaders of the emerging future to redress this imbalance and view the adoption of these technologies holistically.
Like all technology, automation and algorithmic management are value-neutral until they are applied to the world. The responsibility rests with leaders to bring them to bear in a way that aligns with the organisation’s broader purpose and values. For example, Dr Matthew Beard and Dr Simon Longstaff from The Ethics Centre articulate eight principles for good technology that is ethical by design:
- Ought before can – The fact that we can do something does not mean that we should.
- Non-instrumentalism – Never design technology in which people are merely a part of the machine.
- Self-determination – Maximise the freedom of those affected by your design.
- Responsibility – Anticipate and design for all possible uses.
- Net benefit – Maximise good, minimise bad.
- Fairness – Treat like cases in a like manner; different cases differently.
- Accessibility – Design to include the most vulnerable user.
- Purpose – Design with honesty, clarity and fitness for purpose.
In the circumstances of the Amazon fulfillment centre, we might question how the technology is being used against the second and third principles (non-instrumentalism and self-determination), and expand the notion of “net benefit” (the fifth principle) beyond a narrow focus on economic value alone.
Moreover, algorithmic management could even be refocused to track, predict, and improve the wellbeing of workers. As Henri Schildt points out in his book The Data Imperative, companies already have deep insights into how computers can detect and shape human emotions; however, much of that capability is currently directed at customers rather than employees. As Professor Schildt argues, “spending even a fraction of the money that is being invested in the design of customer-facing interfaces to design employee experience could significantly improve algorithmic management”.
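As a thought experiment, the same kind of telemetry pipeline could just as easily surface wellbeing signals. The sketch below is speculative: the metrics, thresholds, and rating scale are illustrative assumptions, and the deliberate design choice is that the output is a prompt for a human manager rather than an automated sanction.

```python
from statistics import mean

def wellbeing_flags(shift_log: dict[str, list[float]]) -> list[str]:
    """Surface wellbeing signals for a human manager to act on,
    rather than triggering any automated sanction."""
    flags = []
    if mean(shift_log["hours_worked"]) > 9.5:
        flags.append("sustained long shifts: review scheduling")
    if mean(shift_log["breaks_taken"]) < 2:
        flags.append("breaks below recommended minimum")
    if mean(shift_log["self_reported_strain"]) > 3.5:  # 1-5 scale
        flags.append("elevated strain: offer a support check-in")
    return flags

# One illustrative week for a single worker.
log = {
    "hours_worked": [10, 9, 10.5, 11, 9],
    "breaks_taken": [1, 2, 1, 1, 2],
    "self_reported_strain": [3, 4, 4, 5, 4],
}
print(wellbeing_flags(log))
```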
We are called to implement emerging technologies in a manner that is human-centred and aimed at uplifting human capacity rather than narrowing it. Regenerative leaders will be those who see the potential to employ artificial intelligence towards the flourishing of the whole system.
Need More Help?
Keen to find out more about how your organisation can best approach and interact with automated processes, artificial intelligence, and machine learning systems? Performance Frontiers are experts in helping guide leaders to cultivate a range of creative and strategic practices within their teams to embrace the future of work with an expansive and regenerative mindset. Speak to Chris about how we can partner with you today to leverage emerging technologies in a manner that is human-centred and aimed at uplifting human capacity, rather than narrowing it.
Henri Schildt, The Data Imperative, Oxford: Oxford University Press, 2020.
Matthew Beard and Simon Longstaff, Ethical By Design: Principles for Good Technology, The Ethics Centre, https://ethics.org.au/ethical-by-design/#download-copy