
FAIR/HE

AI increasingly permeates social life, which challenges the education sector in two ways: first, it is expected to train highly qualified AI experts; second, AI-based systems (e.g. learning analytics and drop-out detection) are leading to profound changes in research and teaching.

While advocates expect AI to improve the quality of education and strengthen the efficiency of universities, critics fear that such systems could reproduce or even reinforce social inequalities. When decisions on access to education or academic success are increasingly made by AI systems, central questions of fairness, responsibility and transparency arise. The interdisciplinary FAIR/HE project therefore analyses the technological and social conditions needed to implement fair and pro-social AI systems at German universities.

We distinguish "two faces" of fairness: (1) objective fairness and (2) perceived fairness. Cooperative research between computer scientists and social scientists is indispensable for adequately investigating both forms of (un)fairness and their interaction. The interdisciplinary FAIR/HE consortium contributes to preparing German universities for the challenges and opportunities of AI. The project will develop procedures and solutions for the fair handling of data, create tools for designing non-discriminatory and understandable algorithms, and provide valuable insights into the cognitive and emotional reactions of those affected.
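As a purely illustrative sketch (not part of the project description), "objective fairness" is often operationalised with statistical criteria such as demographic parity, i.e. whether a predictive model flags students from different groups at similar rates. The drop-out prediction outputs and group labels below are hypothetical:

```python
# Minimal sketch of one "objective fairness" criterion: demographic parity.
# All data here are hypothetical and serve only to illustrate the idea.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # flagging rate in group 0
    rate_b = y_pred[group == 1].mean()  # flagging rate in group 1
    return abs(rate_a - rate_b)

# Hypothetical model outputs: 1 = "at risk of dropping out", 0 = "not at risk"
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
groups      = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(predictions, groups))  # 0.5; 0.0 would mean parity
```

Perceived fairness, by contrast, concerns how affected students and staff experience such decisions and cannot be captured by a metric of this kind, which is why the project combines computational and social-science methods.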
