Within the second funding phase, the main research objective of the Use Case Business is to better understand the design of human-machine interaction when AI is used in business decision-making. More specifically, the aim is to understand the frequently observed preparer-user gap, i.e., the gap between the availability of AI as decision-supporting information technology in firms and its actual use. To date, there is a lack of validated empirical evidence as to why AI is often not used to its full potential. In answering this question, both subprojects within the Use Case Business, Financial Governance and Compliance and Social Governance and Compliance, tie in directly with the first funding phase.
Trust in AI as a prerequisite for its use
In many firms, AI is increasingly being used to underpin complex business decisions, for example in forecasting sales, earnings, or other key performance indicators and market developments, in assessing economic risks and opportunities, in the selection and performance appraisal of employees, and in the anticipation and detection of errors, technical deficiencies, or compliance violations. However, for AI-based information to actually become relevant to decision-making, trust in its accuracy and appropriateness is an essential prerequisite.
During the first phase of the project, research results in the Use Case Business, as well as the exchange with business practice, e.g., in the context of the AI Conference 2020 conducted by the research group, have shown how significant and at the same time how little researched the question of trust in AI is as a precondition for its acceptance and use in companies. In order to address this research gap as comprehensively as possible, the research in the two subprojects Financial Governance and Compliance and Social Governance and Compliance is linked by two cross-cutting topics, in which the related questions are investigated in a complementary way from both behavioral and neuroeconomic perspectives.
How does AI become credible?
The preliminary work from the first funding phase has shown that the understandability, comprehensibility, and explainability of AI-based information can play an important role in its acceptance. The starting point for the research in the second funding phase is therefore to investigate the effects of recent technological developments in AI, such as "Explainable AI", in order to derive recommendations for the design of human-machine interaction in companies.