via Future of Privacy Forum
What if your boss was an algorithm? Imagine a world in which artificial intelligence hasn’t come for your job – but that of your manager: whether it’s hiring new staff, managing a large workforce, or even selecting workers for redundancies, big data and sophisticated algorithms are increasingly taking over traditional management tasks. This is not a dystopian vision of the future. According to Professor Jeremias Adams-Prassl, algorithmic management is quickly becoming established in workplaces around the world.
Should we be worried? Last month’s A-level fiasco has shown the potential risks of blindly entrusting life-changing decisions to automation. And yet, the Oxford law professor suggests, we aren’t necessarily defenceless or impotent in the face of machines – and might even want to (cautiously) embrace this revolution. To work out how we should go about regulating AI at work, he has been awarded a prestigious €1.5 million grant by the European Research Council.
This will require a serious rethink of existing structures. Over the course of the next five years, Professor Adams-Prassl’s project will bring together an interdisciplinary team of computer scientists, lawyers, and sociologists to understand what happens when key decisions are no longer taken by your boss, but an inscrutable algorithm.
Employers today can access a wide range of data about their workforce, from phone, email, and calendar logs to daily movements around the office – and your Fitbit. Even the infamous 19th-century management theorist Frederick Taylor could not have dreamt of this degree of monitoring. This trove of information is then processed by a series of algorithms, often relying on machine learning (or ‘artificial intelligence’) to sift data for patterns: what characteristics do current star performers have in common? And which applicants most closely match these profiles?
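The kind of profile matching described here can be sketched in a few lines. This is a minimal illustration, not any vendor's actual system: the feature names and numbers are invented, and real tools use far richer models than a single cosine-similarity ranking.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical features per person: (years of experience, emails per day,
# badge swipes per week) – stand-ins for whatever the employer logs.
star_performers = [(8, 40, 12), (10, 35, 14), (7, 45, 11)]

# Collapse the current stars into a single 'ideal' profile...
ideal = tuple(sum(col) / len(star_performers) for col in zip(*star_performers))

# ...then rank applicants by how closely they resemble it.
applicants = {"A": (9, 38, 13), "B": (2, 80, 3)}
ranked = sorted(applicants, key=lambda k: cosine(applicants[k], ideal), reverse=True)
```

The catch, as the article goes on to note, is that 'resembles our current stars' quietly becomes the definition of merit – outliers lose by construction.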
‘Management automation has been with us for a while’, notes the professor. ‘But what we’re seeing now is a step change: algorithms have long been deployed to manage workers in the gig economy, in warehouses, and similar settings. Today, they’re coming to workplaces across the spectrum, from hospitals and law firms to banks and even universities.’ The Covid-19 pandemic has provided a further boost, with traditional managers struggling to look after their teams. As a result, the algorithmic boss is not just watching us at work: it has come to our living rooms.
That’s not necessarily a bad thing: algorithms have successfully been deployed to catch out insider trading, or to help staff plan their careers and find redeployment opportunities in large organisations. At the same time, Professor Adams-Prassl cautions, we have to be careful about the unintended (yet often entirely predictable) negative side effects of entrusting key decisions to machine learning. Video-interviewing software has repeatedly been shown to discriminate against applicants based on their skin tone rather than their skills. And that sophisticated hiring algorithm may well spot that a key pattern amongst your current crop of senior engineers is that they’re all men – and thus ‘learn’ to discard the CVs of promising female applicants. Simply excluding gender, race, or other characteristics won’t cure the problem of algorithmic discrimination, either: there are plenty of other datapoints, from shopping habits to postcodes, from which the same information can be inferred. Amidst a burgeoning literature exploring algorithmic fairness and transparency, however, the workplace has received scant attention.
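The proxy problem is easy to demonstrate with a toy example. Everything below is invented for illustration – two postcodes, a handful of applicants – but the mechanism is the real one: a ‘gender-blind’ rule learned from a skewed hiring history still sorts applicants by gender, because postcode carries the same signal.

```python
# Hypothetical hiring history: past hires all came from postcode "N1",
# which in this toy town happens to be predominantly male.
# Tuples are (postcode, gender, was_hired).
history = [
    ("N1", "m", True), ("N1", "m", True), ("N1", "m", True),
    ("E2", "f", False), ("E2", "f", False), ("E2", "m", False),
]

# A 'gender-blind' rule learned from that history: shortlist anyone
# from a postcode that past successful hires share.
hired_postcodes = {pc for pc, _, hired in history if hired}

def shortlist(postcode):
    return postcode in hired_postcodes

# New applicants; gender is withheld from the model entirely.
applicants = [("N1", "m"), ("N1", "m"), ("E2", "f"), ("E2", "f")]
decisions = [(gender, shortlist(pc)) for pc, gender in applicants]

def selection_rate(gender):
    group = [ok for g, ok in decisions if g == gender]
    return sum(group) / len(group)
```

Here `selection_rate("m")` comes out at 1.0 and `selection_rate("f")` at 0.0, even though gender never entered the model – which is precisely why dropping the protected attribute is no cure.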
Existing legal frameworks, designed for the workplace of the last century, struggle to keep pace: they threaten to stifle innovation – or to leave workers unprotected. The GDPR prevents some of the worst instances of people management (no automated sacking by email, as has happened in the US) – but it is nowhere near a fine-grained enough tool. Understanding the technology is key to solving this conundrum: what information is collected, and how is it processed?
‘There’s nothing inherently bad about the use of big data and AI at work: beware any Luddite fantasies’, the professor insists. But employers should tread carefully: ‘Yes, automating recruitment processes might save significant amounts of time, and if set up properly, could actively encourage hiring the best and most diverse candidates – but you also have to watch out: machine learning algorithms, by their very nature, tend to punish outliers.’
Backed by the recently awarded European Research Council (ERC) grant, his team will come up with a series of toolkits to regulate algorithmic management. The primary goal is to take account of all stakeholders, not least by promoting the importance of social dialogue in reshaping tomorrow’s workplace: the successful introduction of algorithmic management requires cooperation in working out how best to adapt software to individual circumstances, whether in deciding what data should be captured, or which parameters should be prioritised in the recruitment process.
It’s not simply a question of legal regulation: we need to look at the roles of software developers, managers, and workers. There’s little point in introducing ‘AI for AI’s sake’, investing in sophisticated software without a clear use case. Workers will understandably be concerned, and may seek to resist: from ripping out desk activity monitors to investing in clever Fitbit cradles which simulate your workout of choice.
‘There’s no such thing as the future of work’, concludes Professor Adams-Prassl. ‘When faced with the temptation of technological predeterminism, always remember to keep a strong sense of agency: there’s nothing inherent in tech development – it’s our choices today that will ensure that tomorrow’s workplace is innovative, fair, and transparent.’