Building a machine learning platform aimed at increasing transparency in AI systems through human-friendly explanations


XaiPient

XaiPient’s mission is to make AI transparent and easy to understand by offering ways to inspect the “whys” behind AI systems’ decisions. A big problem with machine learning models today is that they are good at predicting certain outcomes with a high degree of accuracy but fail to explain how they arrive at those outcomes (the black-box problem). As more aspects of the world are automated by AI, understanding how AI models operate is crucial to prevent accidents and biases when these systems are deployed in the wild.

Approach

I joined the founding team for four months to help them turn years of technical research into a tangible product. I helped the product team make sense of the fast-growing space of interpretability (XAI) by mapping out competitors, investigating potential new technologies, and identifying relevant issues in the design of explanations across a wide range of applications, users, and domains. After gathering insights from users, I produced prototypes to validate hypotheses about the value proposition of certain features and aspects of the product.

Problem

The biggest design challenge I faced was: how do we build human-friendly explanations that let users understand what influenced a machine decision without relying on technical concepts and complex data visualizations? Additionally, how do we communicate decisions effectively for different levels of literacy and use cases? Since the prediction made by a machine learning system is probabilistic, how do we communicate uncertainty without influencing user behavior? And finally, how do we prevent explanations from being misunderstood, potentially putting at risk those affected by these very systems?

Achievements

After validating different hypotheses and gaining insights about different use cases, I shifted my focus towards ideating on solutions. I prioritized thinking in components and design patterns rather than worrying about flows and secondary navigation aspects of the application. This decision enabled the team to make progress on the core components of the product before the strategy was fully defined. I worked with developers and designers to create a highly customizable pattern library comprising different explanation methods and visualizations that matched different levels of data literacy, goals and needs.

Services

Product Design
User Research

Capabilities

Data Visualization Design
User-centered Design
Iterative Prototyping & Wireframing
Design System
User Testing & Market Fit
Product Stories & Storyboarding

Location

Princeton, NJ / Berlin

Duration

4 months

01 User Research

The problem

A big problem with machine learning models today is that they are good at predicting outcomes with a high degree of accuracy but fail to explain how they arrive at those outcomes (the black-box problem).


Mapping out the interpretability space

I spent an entire month researching the space to find different potential strategies the product could explore. I documented these findings in the form of reports, evaluations, and recommendations aimed at helping the team to think strategically about how to shape the product, and what use cases and audiences to focus on. This allowed the founding team to transition from thinking about the development of the core technology to thinking in terms of user needs, product stories, and value proposition.


Interpretable machine learning is a complex problem that involves many disciplines, industries, and stakeholders


Expert interviews and literature review

Since AI affects so many aspects of society today, I knew it was essential to include voices and perspectives beyond those on the team. Thinking about the problem through a computational lens only solved part of the puzzle. It was necessary to go beyond initial assumptions if we were to design solutions that addressed the design challenges holistically. To gain such a perspective, I interviewed cognitive scientists, statisticians, HCI researchers, and scholars involved in algorithmic fairness to uncover insights about the challenges and limitations of “explaining AI”, and its implications for society at large.


Key insights

Explainable AI today has a strong focus on mathematics and computer science to interpret “black box” models. While these are significant contributions, they tend to neglect the human side of the explanations and whether they are usable and practical in real-world situations.

Existing applications focus on offering complex data visualizations for data scientists and neglect a much larger user base of non-technical users (e.g., underwriters, regulators, and people who use machine-learning-powered applications to make predictions).

Recognizing this disparity, how can we develop visualizations and explanations that communicate the decision behind a prediction using natural language and non-technical components?


Use cases: where is interpretable machine learning the most valuable?

The need for interpretability stems from an incompleteness in the problem formalization, which creates a fundamental barrier to optimizing and evaluating the system. Four use cases stood out:

  1. Scientific understanding
    The goal is to gain knowledge, but we do not have a complete way of stating what knowledge is; thus, explanations can facilitate the conversion from data to knowledge.
  2. Ethics
    The user may want to guard against certain kinds of discrimination, and their notion of fairness may be too abstract to be encoded into the system (e.g., one might desire a ‘fair’ classifier for loan approval).
  3. Safety
    For complex tasks, the end-to-end system is rarely completely testable; one cannot create a complete list of scenarios in which the system may fail.
  4. Mismatched objectives
    The algorithm may be optimizing an incomplete objective, that is, a proxy function for the ultimate goal.

Understanding user needs

Decision-makers, scientists, compliance and safety engineers, data scientists, and machine learning researchers all come with different background knowledge and communication styles.

I developed personas based on the user interviews and literature review I conducted, grouped into three categories that related directly to the product: auditors, underwriters, and business users. I also crafted product stories to bring the design and development team on board with how explanations might be used and what real questions users are trying to answer. Finally, I mapped the requirements for the different explanation components, including tone of voice, depth of explanation, level of technical literacy, and UX requirements.


Developing a taxonomy of data visualization components and interpretability methods

To facilitate brainstorming sessions, I created a taxonomy that included the different personas (first column), high-level tasks, categories of explanations (UX), and available interpretability methods (computer science). My goal was to have participants start on the left (users) and work their way to the far right (technology). In the process, designers and developers could evaluate whether the explanations offered a good experience. In addition, I included a second group of attributes covering the time constraints of a particular task and the expertise different users might have.

In my experience, visualizing qualitative data helped the team think creatively around constraints (user needs). By having the different aspects of the product on one page, from user needs to the underlying technology, the team was able to spot the shortcomings of certain methods for particular tasks and users, as well as the technology required to create an adequate user experience for each of them.
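To make the structure concrete, here is a rough sketch of how such a taxonomy might be encoded; the personas, tasks, and method names below are hypothetical placeholders rather than the actual content of the internal document:

```python
from dataclasses import dataclass, field


@dataclass
class TaxonomyEntry:
    """One row of the taxonomy: who needs an explanation, for which task,
    in which UX form, and which interpretability methods could power it."""
    persona: str                 # e.g. "underwriter", "auditor", "business user"
    task: str                    # high-level job the explanation supports
    explanation_type: str        # UX category (counterfactual, feature attribution, ...)
    methods: list[str] = field(default_factory=list)  # candidate techniques
    time_budget: str = "minutes"                      # how long the user can spend reading
    expertise: str = "non-technical"                  # data literacy of the persona


# Hypothetical entries of the kind discussed in the brainstorming sessions
taxonomy = [
    TaxonomyEntry(
        persona="underwriter",
        task="justify a declined loan application to an applicant",
        explanation_type="counterfactual",
        methods=["counterfactual search", "rule extraction"],
        time_budget="seconds",
    ),
    TaxonomyEntry(
        persona="data scientist",
        task="debug unexpected model behaviour",
        explanation_type="feature attribution",
        methods=["SHAP values", "permutation importance"],
        time_budget="hours",
        expertise="technical",
    ),
]

# Reading each row from user (left) to technology (right), as in the workshops
for row in taxonomy:
    print(f"{row.persona} -> {row.task} -> {row.explanation_type} -> {', '.join(row.methods)}")
```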


Ideation and brainstorming sessions

Some of the concepts emerging from these sessions were extremely technical, featuring heat maps and other dense visualization elements. Others were much simpler to understand, such as counterfactual explanations (“why was a certain prediction made instead of another one?”). Some of the most exciting ideas were based on logic programming and natural language processing and explored conversational AI models.
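As a minimal sketch of the counterfactual idea (not XaiPient’s actual method), a greedy search over one feature at a time can treat any already-trained classifier as a black box and report the smallest change it finds in plain language; the function below assumes numeric features and a generic `predict` callable:

```python
import numpy as np


def simple_counterfactual(predict, x, feature_names, max_delta=0.5, n_steps=21):
    """Greedy one-feature-at-a-time counterfactual search (illustrative only).

    `predict` is any function mapping a 2-D array to class labels, so this
    treats the model as a black box and needs no access to its internals.
    """
    original = predict(x.reshape(1, -1))[0]
    for i, name in enumerate(feature_names):
        for delta in np.linspace(-max_delta, max_delta, n_steps):
            candidate = x.copy()
            candidate[i] = x[i] + delta
            if predict(candidate.reshape(1, -1))[0] != original:
                return (f"If '{name}' had been {candidate[i]:.2f} instead of "
                        f"{x[i]:.2f}, the decision would have been different.")
    return "No single-feature change within the search range flips the decision."
```

A production-grade counterfactual method would additionally constrain the suggested changes to be plausible and actionable, for example never recommending a lower age.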


Defining the product principles

The research I conducted also helped clarify the design of the SaaS architecture. Specifically, after talking to different users to understand their workflows, it became obvious to the team that the product should work with already-trained models instead of requiring proprietary technology.
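A minimal sketch of that principle, assuming a hypothetical `ExplanationService` wrapper rather than the actual XaiPient API: the service only needs a prediction callable, so customers can plug in models trained in any framework without retraining on proprietary technology.

```python
from typing import Callable, Sequence

import numpy as np


class ExplanationService:
    """Hypothetical model-agnostic wrapper: it only requires a prediction
    function, so it works with models that are already trained and deployed."""

    def __init__(self, predict_fn: Callable[[np.ndarray], np.ndarray],
                 feature_names: Sequence[str]):
        self.predict_fn = predict_fn
        self.feature_names = feature_names

    def feature_influence(self, x: np.ndarray, background: np.ndarray) -> dict:
        """Crude attribution: how much does swapping each feature for its
        background average change the model's score? (Illustration only.)"""
        baseline = float(self.predict_fn(x.reshape(1, -1))[0])
        influence = {}
        for i, name in enumerate(self.feature_names):
            perturbed = x.copy()
            perturbed[i] = background[:, i].mean()
            influence[name] = baseline - float(self.predict_fn(perturbed.reshape(1, -1))[0])
        return influence


# Usage sketch: wrap an existing scikit-learn model's probability output
# service = ExplanationService(lambda X: model.predict_proba(X)[:, 1], feature_names)
```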


System architecture


03 Final designs

Disclaimer: The solutions presented here are only illustrative and were not used in the final designs. Due to my NDA agreement, I'm not able to disclose the final project outcome. If you have further questions or want to know more about a specific design solution, please get in touch.


Designing a pattern library of explainability components

Since aspects of the product strategy hadn't been defined yet, I suggested that the team focus on individual explanation components. Each component targeted a specific task and user, using different visualization techniques to communicate information. This way, components could be designed, validated, and tested without worrying about secondary aspects of the software architecture.

How might we communicate complex data-centric decisions to a non-technical audience? It was extremely important for me to think beyond the standard data visualization components found in most data-centric applications (e.g., bar charts and plots). I was interested in exploring the design of new components that were not intimidating, yet offered the right amount of information for the task and the user's literacy.
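One direction, sketched below with hypothetical feature names and weights rather than a shipped component, is to translate numeric feature attributions into a short plain-language statement instead of a chart:

```python
def narrate_attributions(attributions: dict[str, float], outcome: str, top_k: int = 2) -> str:
    """Turn numeric feature attributions into a plain-language sentence;
    a sketch of the kind of non-technical explanation component explored here."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    phrases = []
    for name, weight in ranked[:top_k]:
        direction = "supported" if weight > 0 else "worked against"
        phrases.append(f"your {name} {direction} this outcome")
    return f"The application was {outcome} mainly because " + " and ".join(phrases) + "."


# Hypothetical example
print(narrate_attributions(
    {"income": 0.42, "credit history length": 0.18, "recent missed payment": -0.31},
    outcome="approved",
))
# -> "The application was approved mainly because your income supported this
#     outcome and your recent missed payment worked against this outcome."
```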


UI components


Color studies


Similar projects


Gradient

Designing the future of human-AI collaboration through transparent and easy-to-use interfaces

How can AI serve as a tool for cognitive augmentation? The project explores new ways humans can work with generative algorithms. My work spanned UX research, the design and validation of the data visualization system, and the delivery of a technical proof of concept.

→ Read case study

Get in touch

  • ricardoasaavedra@gmail.com