Algorithmic Video Analyses in Recruiting Processes and Possible Consequences
Algorithmic decision-making is increasingly utilized in recruitment processes. The goals of implementing it in these processes include, among other things, saving costs and increasing efficiency and objectivity. Regarding the latter, algorithmic decision-making could be fairer and more objective than human decision-making, which is prone to prejudice and subjective convictions. However, there are also cases that raise the question of whether algorithmic decision-making is really as objective and fair as assumed. The aim of this study is therefore to determine whether the use of algorithmic decision-making leads to discrimination against certain groups of people and reinforces existing inequalities and biases. To this end, we analyzed an existing data set consisting of several thousand video clips, together with the three winning algorithms. Our analysis shows that the underrepresentation of groups of people in terms of gender and ethnicity in the training data set leads to biases. In addition, existing inequalities in the data set are replicated by the algorithms. The study provides evidence for possible negative consequences of algorithmic decision-making, such as the risk of discrimination, and thus offers important practical and theoretical implications.
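To make the kind of bias described above concrete, the following Python sketch shows one common way such disparities can be quantified: comparing mean predicted scores across demographic groups (a demographic-parity-style gap). The synthetic data, the `gender` and `score` column names, and the group proportions are illustrative assumptions, not the study's actual data or methodology.

```python
# Hedged sketch: quantify score disparities across demographic groups.
# All column names and the synthetic data below are illustrative
# assumptions, not the study's actual data set or method.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-in for algorithmic interview scores with a built-in
# imbalance: one group is both underrepresented (30% of samples) and
# scored lower, mimicking the bias pattern the study reports.
df = pd.DataFrame({
    "gender": rng.choice(["female", "male"], size=n, p=[0.3, 0.7]),
    "score": rng.normal(0.5, 0.1, size=n),
})
df.loc[df["gender"] == "female", "score"] -= 0.05

# Demographic-parity-style gap: difference in mean predicted score
# between groups. A gap far from zero indicates systematic disparity.
group_means = df.groupby("gender")["score"].mean()
gap = group_means.max() - group_means.min()
print(group_means)
print(f"mean-score gap: {gap:.3f}")
```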