Artificial intelligence (AI) is on everyone’s mind. But opaque AI applications can also cause real damage. By 2023, over 75% of large companies are therefore expected to hire their own AI specialists in areas such as IT forensics and data protection to reduce brand and reputation risk for their businesses.
With the help of augmented analytics, automatically generated insights and models are increasingly being put to use. However, the explainability of these insights and models (e.g. how they were derived) is crucial for trust, compliance with legal requirements and the protection of brand reputation. Decisions made by algorithms that no one can explain hardly inspire enthusiasm in most people. In addition, some AI applications can reinforce biases they “learn” from their training data.
Explainable AI refers to models whose strengths and weaknesses can be identified, whose probable behavior can be anticipated, and whose potential biases can be detected. It thus makes the decisions of a descriptive, predictive or prescriptive model more transparent. In this way, important qualities such as the accuracy, fairness, stability and transparency of algorithmic decision-making can be ensured.
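To make this more concrete, the sketch below shows one widely used explainability technique: permutation feature importance, here via scikit-learn. It reveals which inputs a trained model actually relies on, turning a black-box classifier into something whose behavior can be inspected. The dataset and model are illustrative assumptions, not taken from this article.

```python
# A minimal sketch of one common explainability technique: permutation
# feature importance. The dataset and model here are placeholders chosen
# for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's test
# score drops: large drops indicate features the model genuinely uses.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features.
ranked = sorted(
    zip(X.columns, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Techniques like this do not explain every individual decision, but they make a model’s overall reliance on particular inputs visible, which is exactly the kind of transparency that supports audits of accuracy and fairness.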