Subjective performance evaluation is an important part of hiring and promotion decisions. We combine experiments with administrative data to understand what drives gender bias in such evaluations in the technology industry. Our results highlight the role of personal interaction. Leveraging 60,000 mock video interviews on a platform for software engineers, we find that average ratings for code quality and problem solving are 12 percent of a standard deviation lower for women. We use two field experiments to study what drives these gaps. Our first experiment shows that providing evaluators with automated performance measures does not reduce gender gaps. Our second experiment compares blind with non-blind evaluations that involve no video interaction: there is no gender gap in either case. These results rule out traditional models of discrimination. Instead, we show that gender gaps widen with extended personal interaction and are larger for evaluators from regions where implicit association test scores are higher. This dependence on personal interaction offers a potential explanation for why audit studies often fail to detect gender bias.
Link to paper: Decoding Gender Bias: The Role of Personal Interaction
Event Speakers

Ashley Craig
Senior Lecturer at ANU Research School of Economics