
„Bias Preservation in Fair Machine Learning“ | Dr. Brent Mittelstadt, University of Oxford/Oxford Internet Institute

HeiCADLectures

Please enter the lecture room with the following link:
https://hhu.webex.com/hhu/j.php?MTID=mfd189989fa09f2d3b825ba9aedc991df

Abstract: Western societies are marked by diverse and extensive biases and inequalities that are unavoidably embedded in the data used to train machine learning. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups. Recognising this problem, much work has emerged in recent years to test for bias in machine learning and AI systems using various fairness and bias metrics. Often these metrics address technical bias but ignore the underlying causes of inequality and take for granted the scope, significance, and ethical acceptability of existing inequalities. In this talk I will introduce the concept of “bias preservation” as a means to assess the compatibility of fairness metrics used in machine learning with the notions of formal and substantive equality. The fundamental aim of EU non-discrimination law is not only to prevent ongoing discrimination, but also to change society, policies, and practices to ‘level the playing field’ and achieve substantive rather than merely formal equality. Based on this, I will introduce a novel classification scheme for fairness metrics in machine learning based on how they handle pre-existing bias and thus align with the aims of substantive equality. Specifically, I will distinguish between ‘bias preserving’ and ‘bias transforming’ fairness metrics. This classification system is intended to bridge the gap between notions of equality, non-discrimination law, and decisions around how to measure fairness and bias in machine learning. Bias transforming metrics are essential to achieve substantive equality in practice.
I will conclude by introducing a bias transforming metric, ‘Conditional Demographic Disparity’, which aims to reframe the debate around AI fairness, shifting it away from the question of which fairness metric is the right one to choose, and towards identifying ethically, legally, socially, or politically preferable conditioning variables according to the requirements of specific use cases.
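To make the conditioning idea concrete, here is a minimal sketch of how Conditional Demographic Disparity might be computed. It assumes the common formulation in which demographic disparity (DD) is the share of the disadvantaged group among those receiving the negative outcome minus its share among those receiving the positive outcome, and CDD averages DD over strata of a conditioning variable, weighted by stratum size. All function and variable names are illustrative, not from the talk or any particular library.

```python
from collections import defaultdict

def demographic_disparity(labels, groups, disadvantaged):
    """DD: proportion of the disadvantaged group among the rejected
    (label 0) minus its proportion among the accepted (label 1)."""
    rejected = [g for y, g in zip(labels, groups) if y == 0]
    accepted = [g for y, g in zip(labels, groups) if y == 1]
    if not rejected or not accepted:
        return 0.0  # disparity undefined if one outcome set is empty
    p_rejected = sum(g == disadvantaged for g in rejected) / len(rejected)
    p_accepted = sum(g == disadvantaged for g in accepted) / len(accepted)
    return p_rejected - p_accepted

def conditional_demographic_disparity(labels, groups, strata, disadvantaged):
    """CDD: DD computed within each stratum of the conditioning
    variable, weighted by the stratum's share of the population."""
    by_stratum = defaultdict(list)
    for y, g, s in zip(labels, groups, strata):
        by_stratum[s].append((y, g))
    n = len(labels)
    cdd = 0.0
    for rows in by_stratum.values():
        ys, gs = zip(*rows)
        cdd += (len(rows) / n) * demographic_disparity(ys, gs, disadvantaged)
    return cdd
```

A CDD near zero within well-chosen strata (for example, applicants conditioned on qualification level) indicates that outcome differences are explained by the conditioning variable rather than group membership; the substantive question the talk raises is which conditioning variables are ethically and legally defensible for a given use case.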


Brent Mittelstadt is a Senior Research Fellow in data ethics at the Oxford Internet Institute, a Turing Fellow at the Alan Turing Institute, and a former member of the UK National Statistician’s Data Ethics Advisory Committee. His research addresses the ethics of algorithms, machine learning, artificial intelligence and data analytics (‘Big Data’). Over the past five years his focus has broadly been on the ethics and governance of emerging information technologies, including a special interest in medical applications.  

Dr Mittelstadt's research focuses on ethical auditing of algorithms, including the development of standards and methods to ensure fairness, accountability, transparency, interpretability and group privacy in complex algorithmic systems. His work addresses norms and methods for the prevention and systematic identification of discriminatory and ethically problematic outcomes in decisions made by algorithmic and artificially intelligent systems. A recent paper on the legally dubious 'right to explanation' and the lack of meaningful accountability and transparency mechanisms for automated decision-making in the General Data Protection Regulation, co-authored with Dr Sandra Wachter and Prof. Luciano Floridi, highlights the pressing need for work in these areas.

Event details

20.10.2021, 17:30 – 19:00
Responsibility: