Image: Two people shaking hands after a job interview (iStock, nathaphat)

Algorithms for Hiring: Bias In, Bias Out

There’s a better way to make algorithms more fair in dealing with race and gender

by University of Texas at Austin

In recent years, employers have tried a variety of technological fixes to combat algorithm bias—the tendency of hiring and recruiting algorithms to screen out job applicants by race or gender.

They may want to try a new approach, according to a new study by Maria De-Arteaga, Texas McCombs assistant professor of information, risk, and operations management. Even after algorithms are adjusted for overt discrimination, they may show a more subtle kind: preferring people who mirror dominant groups.

For instance, when recruiting in a field that has more men, algorithms may favor people who more resemble masculine stereotypes. The researchers call this tendency “social norm bias.”

Such bias compounds existing patterns in the workplace, De-Arteaga says. “Social scientists have studied discrimination in which marginalized individuals are penalized for displaying characteristics that are thought to be typical of their group. We show that algorithms also have this bias.”

With Myra Cheng of Stanford University and Adam Tauman Kalai and Lester Mackey of Microsoft Research, De-Arteaga tested three common techniques for making algorithms fairer. They found stark differences in their effectiveness, with one approach not reducing social norm bias at all.

To help compensate for those deficiencies, the researchers also proposed a new technique: a formula to directly measure social norm bias in an algorithm so it can more effectively be corrected.

Predicting occupations

According to prior research, the roots of such biases often lie in the data sets used to train algorithms. The data reflect the existing workforce: most surgeons, for example, are White men, so an algorithm trained on their records learns to expect that pattern.

To compensate, employers commonly use three types of interventions (a generic sketch follows the list):

  • Pre-processing, which reweighs the data before it’s used.
  • Post-processing, which adjusts results for certain groups after the algorithm has been trained.
  • In-processing, which modifies the algorithm while it’s being trained.
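
To make the three approaches concrete, here is a rough sketch in Python using scikit-learn. It is not the study's code; the data, group labels, and thresholds are hypothetical placeholders, and a production system would more likely rely on a dedicated fairness library.

```python
# Generic illustration of the three intervention families (not the study's code).
# All data, group labels, and thresholds below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                  # hypothetical features
group = rng.integers(0, 2, size=n)           # hypothetical protected attribute
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

# 1) Pre-processing: reweigh the data so each (group, label) cell carries equal weight.
weights = np.ones(n)
for g in (0, 1):
    for label in (0, 1):
        cell = (group == g) & (y == label)
        if cell.any():
            weights[cell] = n / (4 * cell.sum())
model_pre = LogisticRegression().fit(X, y, sample_weight=weights)

# 2) In-processing: change training itself, e.g. by adding a fairness penalty to
#    the loss (sketched as a comment; typically done with a custom training loop).

# 3) Post-processing: train as usual, then apply group-specific decision thresholds
#    chosen so positive-prediction rates line up across groups.
model_post = LogisticRegression().fit(X, y)
scores = model_post.predict_proba(X)[:, 1]
thresholds = {g: float(np.quantile(scores[group == g], 0.5)) for g in (0, 1)}
y_hat = np.array([scores[i] >= thresholds[group[i]] for i in range(n)])
```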

To test how well those approaches work, De-Arteaga and her colleagues used a data set of 397,340 biographies spanning 28 occupations. Because the biographies were written in the third person, each had a “she” or “he” pronoun associated with it. An additional data set of biographies used nonbinary pronouns.

The researchers applied the three different types of interventions. The central question: Would the adjusted results display social norm bias when using someone’s biography to predict their occupation?
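
As a rough illustration of that prediction task (not the researchers' pipeline), an occupation classifier can be built from biography text with standard tools. The toy biographies and labels below are invented placeholders.

```python
# Toy sketch of predicting occupation from biography text (not the study's pipeline).
# The biographies and labels are invented placeholders; the real data set contains
# 397,340 biographies across 28 occupations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

bios = [
    "She performed cardiac surgery at a teaching hospital for ten years.",
    "He leads a surgical team specializing in trauma care.",
    "She teaches graduate seminars on European history.",
    "He has published three books on medieval literature.",
]
occupations = ["surgeon", "surgeon", "professor", "professor"]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression(max_iter=1000)
classifier.fit(vectorizer.fit_transform(bios), occupations)

# Predict the occupation for a new, unseen biography.
print(classifier.predict(vectorizer.transform(["He operates on patients every week."])))
```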

Unfortunately, the bias persisted. For male-dominated occupations, the algorithm looked for language associated with men. If it didn’t find such language, it was less accurate in guessing a person’s occupation.

For example, the algorithm associated the word “empowerment” with women. Female surgeons who used the word were less likely to be identified as surgeons.
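
One way to see that kind of gap, sketched below with hypothetical inputs, is to compare the classifier's accuracy for women whose biographies do and do not contain language the model associates with men. This is an illustration, not the researchers' analysis.

```python
# Illustrative check (not the study's analysis): compare accuracy for women in a
# male-dominated occupation, split by whether the biography contains language the
# model associates with men. All inputs are hypothetical NumPy arrays.
import numpy as np

def accuracy_gap(y_true, y_pred, gender, has_masculine_language):
    """Accuracy for women with vs. without male-associated language in their bios."""
    women = gender == "F"
    with_terms = women & has_masculine_language
    without_terms = women & ~has_masculine_language
    acc_with = float((y_pred[with_terms] == y_true[with_terms]).mean())
    acc_without = float((y_pred[without_terms] == y_true[without_terms]).mean())
    return acc_with - acc_without  # a large positive gap suggests social norm bias
```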

“When there is social norm bias, the individuals in the minority who benefit from an intervention will be those who most adhere to the social norms of the majority,” De-Arteaga says.

The least effective fix, she found, was post-processing, which did not reduce social norm bias at all.

“These types of intervention measures are easier and cheaper to integrate into a system because they do not require retraining the model,” De-Arteaga says. “But they do not mitigate social norm bias at all.”

Fairer algorithms

The findings have widespread implications for correcting algorithm bias, De-Arteaga says. Using current techniques, companies may think that they have addressed gender discrimination.

But because those techniques are based on rigid characteristics associated with a group, they don’t show the whole picture. They may penalize people who don’t fit stereotypes of the majority.

To help compensate for those problems, De-Arteaga and her colleagues propose a formula to directly measure the degree of social norm bias in an algorithm. Data science or machine learning departments could use the formula to guide algorithm selection, she says. “Companies can add these measures to their toolbox.”
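
The release does not give the formula itself, so the sketch below only illustrates one plausible way such a measure could work: score each biography's adherence to the majority group's language norms with an auxiliary classifier, then check how strongly that score tracks the occupation classifier's output within the minority group. The function name and inputs are assumptions, not the researchers' definition.

```python
# Hedged sketch: one plausible way to quantify social norm bias, not the
# researchers' published formula. `norm_adherence` is assumed to come from an
# auxiliary classifier scoring how closely a biography matches the majority
# group's language norms; `occupation_scores` are the main classifier's outputs.
import numpy as np
from scipy.stats import spearmanr

def social_norm_bias(occupation_scores, norm_adherence, gender, minority="F"):
    """Rank correlation between norm adherence and classifier scores, computed
    over the minority group only; values near 1 suggest that the minority
    members who benefit are those closest to majority norms."""
    mask = gender == minority
    rho, _ = spearmanr(norm_adherence[mask], occupation_scores[mask])
    return rho
```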

The research applies to areas beyond the job market, she adds. Social norm bias is likely to exist in other algorithms, such as those used for Social Security payments, health care, or lending decisions.

“This naturally extends to tasks other than occupation classification,” De-Arteaga says. “More work needs to be done to understand the extent of this bias in other domains and assess its consequences.”

- This press release was originally published on the University of Texas at Austin website