
Use Case Law

Normative Limits of Final Decision-Making by Artificial Intelligence - From Ethics to Law?

In the second funding phase of the Manchot Research Group, the newly added Use Case Law will not only complement the three existing Use Cases Health, Business and Politics; it will also address fundamental questions of machine-aided decision-making that are essential for adequately evaluating and handling the AI-shaped processes investigated and applied in the other three Use Cases.

The more machine intelligence encroaches on what has hitherto been the very own domain of human decision-making, the more urgent it becomes to determine whether there should remain genuinely human decision-making reserves, and how the vision of a human-principled AI can be guaranteed in the long term.

What perspectives arise from the search for a normatively reserved, ultimate human responsibility?

Against this background, the Use Case Law is centrally concerned with two interlinked guiding research questions. The first examines the legalization of ethical guidelines for AI at different levels. In this context, the current paradigm shift - the transformation process from ethics to positive law - is to be taken into account: it will be analyzed which of the more than 200 different AI ethics guidelines and codes solidify into positive law in national and international contexts, as well as how, to what extent, and why they do so. The second, interrelated guiding research question deals with normative limits on delegation to AI. Accordingly, it investigates whether certain existential decisions must or should be reserved exclusively for humans, and whether certain decisions may not be delegated to AI at all or may only be (pre-)determined by AI.
