More than one in 10 workers will try to trick artificial intelligence (AI) systems used to measure employee behaviour and productivity, according to analyst Gartner.
While demand for remote employee monitoring systems has increased significantly during the lockdown, the use of such technology raises questions over privacy, ethics and whether such systems are actually needed.
“Many businesses are making a permanent shift to full- or part-time remote work, which can be both costly and require cultural changes,” said Whit Andrews, distinguished research vice-president at Gartner. “For management cultures that are accustomed to relying on direct observation of employee behaviour, remote work strengthens the mandate to digitally monitor worker activity, in some cases via AI.
“Just as we’ve seen with every technology aimed at restricting its users, workers will quickly discover the gaps in AI-based surveillance strategies. They may do so for a variety of reasons, such as in the interest of lower workloads, better pay or simply spite. Some may even see tricking AI-based monitoring tools as more of a game to be won than disrespecting a metric that management has a right to know.”
According to Gartner, some organisations have deployed AI-enabled systems to analyse employee behaviour. Such tools work in a similar way to how e-commerce sites analyse the behaviour of shoppers. Most offer basic activity logging with alerts, while Gartner said more sophisticated versions attempt to detect positive actions or analyse misbehaviour.
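To illustrate what "basic activity logging with alerts" can mean in practice, the sketch below records timestamped activity events and flags idle gaps. All names and thresholds here are illustrative assumptions, not any vendor's actual API:

```python
from datetime import datetime, timedelta

# Illustrative threshold: flag any gap between events longer than 15 minutes.
IDLE_THRESHOLD = timedelta(minutes=15)

class ActivityLogger:
    """Records timestamped activity events and reports idle gaps."""

    def __init__(self):
        self.events = []  # list of (timestamp, event_name), in arrival order

    def record(self, timestamp, event_name):
        self.events.append((timestamp, event_name))

    def idle_alerts(self):
        """Return (gap_start, gap_end, duration) for gaps over the threshold."""
        alerts = []
        for (prev_t, _), (next_t, _) in zip(self.events, self.events[1:]):
            gap = next_t - prev_t
            if gap > IDLE_THRESHOLD:
                alerts.append((prev_t, next_t, gap))
        return alerts

log = ActivityLogger()
log.record(datetime(2020, 9, 1, 9, 0), "login")
log.record(datetime(2020, 9, 1, 9, 5), "document_edit")
log.record(datetime(2020, 9, 1, 10, 0), "email_sent")  # 55-minute gap → alert
print(log.idle_alerts())
```

The simplicity of this approach also shows why such tools are easy to game: anything that emits a periodic event, genuine or not, suppresses the alert.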
Many employers use productivity monitoring systems despite a high percentage of workers finding such tools unappealing. Even before the pandemic, Gartner’s research showed that workers feared new technologies used to track and monitor work habits. As these tools become more prevalent, organisations will increasingly face workers who seek to evade and overwhelm them, said Gartner.
“IT leaders who are considering deploying AI-enabled productivity monitoring tools should take a close look at the data sources, user experience design and the initial use case intended for these tools before investing,” said Andrews. “Determine whether the purpose and scope of data collection supports employees doing their best work. For those that do decide to invest, ensure that the technology is being implemented ethically by testing it against a key set of human-centric design principles.”
Last August, the Information Commissioner’s Office (ICO) began an investigation into Barclays’ use of remote monitoring software from Sapience to check up on staff. Sapience said its workplace analytics technology aggregates thousands of data points – digital output from every corner of the enterprise, every 15 seconds – to provide what it describes as “an unprecedented level of operational visibility” around people, processes and technology.
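Aggregating activity samples into fixed 15-second intervals, as Sapience describes, might look something like the following sketch. Sapience's actual pipeline is proprietary; the function and field names here are assumptions for illustration only:

```python
from collections import defaultdict

# Illustrative bucket width matching the "every 15 seconds" cadence described above.
BUCKET_SECONDS = 15

def aggregate(samples):
    """Group (epoch_seconds, app_name) samples into 15-second buckets,
    counting how many samples each application contributed per bucket."""
    buckets = defaultdict(lambda: defaultdict(int))
    for ts, app in samples:
        bucket_start = (ts // BUCKET_SECONDS) * BUCKET_SECONDS
        buckets[bucket_start][app] += 1
    # Return plain dicts, ordered by bucket start time.
    return {start: dict(apps) for start, apps in sorted(buckets.items())}

samples = [(0, "email"), (4, "email"), (9, "browser"), (16, "editor"), (29, "editor")]
print(aggregate(samples))
# → {0: {'email': 2, 'browser': 1}, 15: {'editor': 2}}
```

Rolling thousands of such counts up across an enterprise is what produces the "operational visibility" the vendor describes, and is also why the ICO's interest centres on how much of an individual's day this granularity exposes.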
Barclays ran a trial of the system, but later stopped using it after a backlash from staff. The ICO has been looking at the extent to which Barclays infringed the privacy of its employees.
While the Barclays case highlights the negative impact of such technology, according to Gartner, workers may seek out gaps where metrics do not capture activity, where accountability is unclear, or where the AI can be fooled by generating false or confusing data. Gartner said this behaviour has already been observed in digital-first organisations: ride-share drivers, for example, sometimes work for two different services simultaneously as a way of maximising their personal earnings.