Usage of AI in Sales Forecasting: The Role of Origin, Accuracy and Explainability
Increasing data availability enables sales forecasting with artificial intelligence (AI). These methods achieve high accuracy, but they are also black boxes insofar as their forecasts can be neither explained nor justified. Users thus face the dilemma of having to trust forecasts that are highly accurate but often completely opaque. It is commonly argued that acceptance of AI-based predictions follows from performance: if the forecasts are correct, users will accept them. This view is questioned, particularly by practitioners, and is also reflected in the (still) very limited use of AI-based forecasting tools in management accounting. This situation raises several questions: Does forecast accuracy actually generate acceptance of AI-based forecasts in management accounting? Is there a “trust handicap” with regard to the origin of a forecast, i.e., do human-made forecasts receive higher trust regardless of their accuracy? Do explanations of how an AI-based forecasting tool operates, or of how a particular forecast came about, increase the acceptance and actual use of forecasts? Our experimental study examines how practitioners deal with sales forecasts of different origins - human vs. artificial intelligence - and whether additional factors, such as accuracy and supplementary explanations, influence the actual use of these forecasts.