Advances in artificial intelligence, sensors, and big data management have far-reaching societal impacts. As these systems augment our everyday lives, it becomes increasingly important for people to understand them and remain in control. The time is ripe to ensure that powerful new autonomous systems have intelligible interfaces built in.
Artificial Intelligence (AI) and Machine Learning (ML) algorithms process sensor data from our devices and support advanced features of services that we use every day. With recent advances in machine learning, digital technology increasingly integrates automation through algorithmic decision-making. Yet there is a fundamental challenge in balancing the powerful capabilities that machine learning provides with designing technology that people feel empowered by. To achieve this, it is critical that people are able to understand how the technology may affect them, so that they can trust it and feel in control.
Integral to the adoption and uptake of AI systems in real-world settings is the ability for people to make sense of and evaluate such systems. This area of study is known as Explainable AI (XAI). As AI systems become more commonplace in everyday decision-making processes, the need for people to understand and interpret these systems becomes paramount to successful human-AI interaction.
The issue of interpretability or explainability of AI systems has received considerable attention in recent years. The prominence of deep learning methods in the last decade (which are inherently complex and opaque) has renewed the pressure to make AI systems discernible to human users. The body of work published in the past few years advancing XAI approaches is extensive. From visualizing internal information-processing mechanics, to identifying feature importance or concept activations, to having one neural network explain another, XAI approaches are diverse.
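One of the approaches named above, feature importance, can be illustrated concisely. The following is a minimal sketch, not a definitive implementation, assuming scikit-learn is available; the synthetic dataset and model choice are purely illustrative. It uses permutation importance: each feature is shuffled in turn and the resulting drop in model accuracy is measured.

```python
# Illustrative sketch of permutation-based feature importance (assumes
# scikit-learn). Dataset, model, and parameters are hypothetical choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: 4 features, of which only 2 are actually informative.
X, y = make_classification(n_samples=300, n_features=4, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy: a large
# drop means the model relied heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```

Outputs of this kind are a starting point for explanation, but as the discussion below notes, a ranked list of scores still leaves open whether a human user can act on it.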
Much Explainable AI research takes a mathematical approach, transforming a complex or “black box” model into a simpler one, or creating inherently interpretable models. While these are significant contributions, they tend to neglect the human side of the explanations and whether they are usable and practical in real-world situations. Often, the research does not appear to be strongly informed by cognitive psychology in terms of how humans can interpret the explanations, and it does not deploy or evaluate the explanations in interactive applications with real users. This is echoed by Shneiderman et al., who identified the need for interfaces that allow users “to better understand underlying computational processes” and give users “the potential to better control their (the algorithms’) actions” as one of the grand challenges for HCI researchers.
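The “transform a complex model into a simpler one” strategy mentioned above is often realized as a global surrogate: an interpretable model trained to mimic a black box's predictions. The sketch below, assuming scikit-learn and illustrative data and model choices, fits a shallow decision tree to a random forest's outputs and reports fidelity (how often the surrogate agrees with the black box):

```python
# Illustrative sketch of a global surrogate model (assumes scikit-learn).
# The black box, surrogate depth, and data are hypothetical choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# "Black box": an ensemble whose internal logic is hard to inspect directly.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global surrogate: a shallow decision tree trained on the black box's
# *predictions* (not the true labels), trading some fidelity for rules a
# person can read off the tree.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: fraction of inputs on which surrogate and black box agree.
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
print(f"Surrogate fidelity: {fidelity:.2f}")
```

Even a high-fidelity surrogate only shifts the question: whether its rules are genuinely intelligible to a given user remains an empirical, human-centered matter.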
What might people need to understand about an AI system in order to act on, and in concert with, the system’s outputs? Who will use this system, and what might they need to know to be able to make sense of its outputs? True progress towards explainable systems can only be made through interdisciplinary collaborations, where expertise from different fields (e.g., machine learning, cognitive psychology, human-computer interaction) is combined and concepts and techniques are developed from multiple perspectives. An HCI approach to XAI design can help us understand AI/interpretability problem spaces more intimately, opening up more imaginative zones of innovation, speculation, and design alternatives through which we might grasp at futures where people and AI systems are more closely aligned in the relations of everyday practice.