The hidden dangers of algorithmic performance management

HRD talks to associate professor Uri Gal about why data-oriented performance evaluation might not be as objective as we think

People analytics relies on big data, but it also relies on algorithms to make decisions in an automated manner.   

The conventional assumption behind using algorithms is that they are objective, efficient and remove bias from the decision-making process, said associate professor Uri Gal, from the Discipline of Business Information Systems at the University of Sydney Business School.

For example, an increasing number of businesses are using these algorithms to identify the ideal job applicant.

Gal cited research indicating that more than 70% of CVs are never read by a human being; they are screened out by an algorithm before a person sees them.

“Ostensibly, the process is meant to be objective and rational,” Gal told HC.

“But the problem is that algorithms are not really objective in the sense that collecting big data actually involves a significant amount of human judgment.”
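To see how that judgement creeps in even at the CV-screening stage, consider a deliberately simplified sketch. The keywords, threshold and function below are invented for illustration and are not taken from any real screening product; the point is that whoever picks the keywords and the pass mark has already made the subjective calls.

```python
# Hypothetical, deliberately simplified CV screen -- not any vendor's real system.
# The "objective" filter is built entirely from human choices: which keywords
# count as relevant, and how many matches are enough.

REQUIRED_KEYWORDS = {"python", "sql", "stakeholder management"}  # chosen by a person
MIN_MATCHES = 2                                                  # also chosen by a person

def passes_screen(cv_text: str) -> bool:
    """Return True if the CV mentions at least MIN_MATCHES of the chosen keywords."""
    text = cv_text.lower()
    return sum(kw in text for kw in REQUIRED_KEYWORDS) >= MIN_MATCHES

# A candidate who describes the same skills in different words never reaches a human.
print(passes_screen("Analyst with Python and SQL reporting experience"))       # True
print(passes_screen("Led data projects and client reporting for five years"))  # False
```

Real applicant-tracking systems are far more sophisticated than this, but the underlying issue is the same: the criteria the algorithm applies were decided by people.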

Consider a manager who wants to assess the performance of his or her employees over the previous year.

There are a number of ways to do that, including collecting data on how much revenue they have generated for the company over the last 12 months, how many clients they have interacted with, or how many leads they have generated.

So we have three data points to assess what we call ‘performance’, but why these three? We could have used any other combination of data points, said Gal.

“We could have used feedback from customers, feedback from colleagues or how much time they spent on email or how many days they were absent from work,” he said.

“There are a variety of different data points we could have used and the choice of data points inevitably involves human judgement.

“So it’s not really objective in the sense that people often think when they are talking about algorithmic decision making.”

Gal added that there is also the question of what weight should be attributed to each data point.

Going back to the previous example, we might have three different data points to assess the performance of employees. Do we give them equal importance?

“We might or we might decide that one data point is twice as important as the other one and the third one is only half as important as the second one, which again involves human judgement,” he said.

“So there is no one correct way to assess the performance of employees. This definitely involves human judgement, which is subjective rather than objective.”
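As a purely illustrative sketch, a few lines of Python show how the weighting choice alone can change who comes out on top. The employees, scores and weights below are made up, and the scoring function is not drawn from Gal's research or any particular product.

```python
# Purely illustrative: two made-up employees scored on the same three data
# points (each already normalised to a 0-1 scale). The only thing that changes
# between the two scores is the human-chosen weighting.

def performance_score(employee, weights):
    """Weighted sum of the chosen data points."""
    return sum(employee[k] * w for k, w in weights.items())

alice = {"revenue": 0.9, "clients": 0.3, "leads": 0.3}
bob   = {"revenue": 0.4, "clients": 0.8, "leads": 0.8}

equal_weights = {"revenue": 1/3, "clients": 1/3, "leads": 1/3}
revenue_heavy = {"revenue": 0.6, "clients": 0.2, "leads": 0.2}

for name, emp in [("alice", alice), ("bob", bob)]:
    print(name,
          round(performance_score(emp, equal_weights), 2),
          round(performance_score(emp, revenue_heavy), 2))

# Output:
# alice 0.5 0.66   <- equal weights favour bob; revenue-heavy weights favour alice
# bob 0.67 0.56
```

Neither weighting is more “correct” than the other, which is exactly the point: the choice is a human judgement dressed up as an objective score.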

Gal’s research involved contacting a number of companies that produce and sell different types of people analytics systems.

He also interviewed people who work at these companies, as well as a range of their clients who use these technologies to manage their own workforces.

“It’s a pretty interesting phenomenon which, to me, is a little bit disconcerting, because there is a general perception that managerial decision making needs to be exclusively based on the use of hard, objective, verifiable data,” he said.

“Inevitably, what these algorithms do is they try to construct models of human behaviour. These models are always going to be simplified versions of a very complex phenomenon which happens in reality, which is the way people behave in organisations.

The data points themselves capture only part of that behaviour: “They are just traces that employees leave as they go about their day.”
 
