As automated data analysis supplements and even replaces human supervision in consequential decision-making (e.g., pretrial bail and loan approval), there are growing concerns from civil organizations, governments, and researchers about the potential unfairness and lack of transparency of these algorithmic systems. To address these concerns, the emerging field of ethical machine learning has focused on proposing definitions and mechanisms to ensure the fairness and explicability of the outcomes of these systems. However, as I will discuss in this talk, existing solutions are still far from perfect and encounter significant technical challenges. In particular, I will discuss two practices from the context of fair and interpretable machine learning where wrong technical assumptions come at a high social cost. I will conclude the talk by showing that, in order to achieve ethical ML, it is essential to have a holistic view of the system.
Isabel Valera is a full Professor at the Department of Computer Science of Saarland University, Saarbrücken (Germany) and an independent research group leader at the MPI for Intelligent Systems in Tübingen (Germany). Prior to this, she worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK), and obtained her PhD and MSc degrees from the University Carlos III in Madrid (Spain). She has held a German Humboldt Post-Doctoral Fellowship and a “Minerva Fast Track” fellowship from the Max Planck Society. She regularly serves as Area Chair for the main conferences on machine learning (NeurIPS, ICML, AISTATS, AAAI, and ICLR) and was Program Co-Chair of ECML-PKDD 2020.