The project "Fair Enough" deals with how the fairness of learning analytics systems can be validated and audited.
The social consequences of algorithmic decision-making are determined by the interplay of the implemented algorithms, the data they use, and user behavior. In computer science, the fairness of algorithmic processes is judged primarily by quantitative, formal-analytical criteria, not all of which can be satisfied simultaneously; users, by contrast, judge fairness based on their subjective, individual perception and on social norms. The project therefore investigates the topic from two complementary sides:
(a) Development of practical methods to assess the fairness of learning analytics systems and data (HTW Berlin),
(b) Investigation of the requirements and expectations of users regarding the fairness of learning analytics systems (HHU Düsseldorf).
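The tension between formal fairness criteria can be made concrete with a small example. The sketch below (illustrative only, not part of the project's tooling) computes two widely used group-fairness metrics, demographic parity and equal opportunity, on toy predictions; the data are chosen so that one criterion is satisfied while the other is violated:

```python
def demographic_parity_diff(y_pred, group):
    # |P(yhat=1 | A=0) - P(yhat=1 | A=1)|: gap in positive-prediction rates
    rate = lambda g: sum(p for p, a in zip(y_pred, group) if a == g) / group.count(g)
    return abs(rate(0) - rate(1))

def equal_opportunity_diff(y_true, y_pred, group):
    # |P(yhat=1 | y=1, A=0) - P(yhat=1 | y=1, A=1)|: true-positive-rate gap
    def tpr(g):
        preds = [p for p, t, a in zip(y_pred, y_true, group) if a == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

# Toy data: binary outcomes and predictions for two groups (A=0, A=1).
# Both groups receive positive predictions at the same rate (demographic
# parity holds), yet their true-positive rates differ (equal opportunity
# is violated) because the groups' base rates differ.
y_true = [1, 1, 0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(y_pred, group))          # 0.0
print(equal_opportunity_diff(y_true, y_pred, group))   # ~0.167
```

This is the kind of conflict the impossibility results formalize: once base rates differ across groups, a classifier generally cannot satisfy all such criteria at once, which is why the project pairs formal metrics with users' perceptions of fairness.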
As a result, the project will develop a tool in the form of a six-step guideline for examining learning analytics systems with regard to their fairness, covering both the fairness of the system itself (its data and algorithms) and how the statements produced by the system are used.