Add transparency and explainability to the ML models used on our platform

A computer monitor displaying a research paper titled 'Guidance for Industry: Detecting Products for Weight Management' on a purple background, with a keyboard, mouse, and graphics tablet below.

Overview

The Science Engine is a robust platform that allows our users to apply ML models to their workflows.

For users to trust model outputs, they must understand how the models work; any product that leverages AI/ML needs an ethical approach to transparency and explainability. I led this project's design efforts, from problem definition and ideation through design refinement, and contributed to the user-testing plan and the handoff of high-fidelity design specs.

Role

Lead UX Designer

Timeline

3 months (ongoing)

The challenge

How can we improve the transparency and explainability of our machine learning models to build trust in the platform?

Science Engine


Add data

  • Users bring in many different types of data from various locations

  • They can manipulate some of the data coming into our product

  • They have access to public data sources


Apply ML models

  • Users apply ML models through a formula bar

  • They connect the models to the data sources

  • The model’s parameters can be manipulated by the user


Get insights

  • Discover connections among larger quantities of information

  • New ways of searching and filtering through mountains of data

  • Perform powerful functions such as batch object extraction

Project background

As a product that lets users apply AI models to their data, we must help users achieve their goals by better understanding how an AI system works throughout its definition, development, deployment, and ongoing interaction.

We also needed to uphold Microsoft's Transparency principle for responsible AI.

Microsoft Responsible AI Positioning Framework title page with a black background and a colorful abstract network graphic at the bottom.

AI & ML

Diagram showing different categories of machine learning, including decision trees, linear regression, K-means clustering, PCA for dimensional reduction, and neural networks for deep learning.

Black-box models

Black-box models, such as neural networks, often provide excellent accuracy, but their inner workings are harder to understand.

Diagram illustrating a black-box model with input on the left and output on the right, showing the black-box converting input into output.

Project goals

Interpret

Include information that allows users to understand complicated machine learning functions as much as possible

Trust

Provide our customers with information about the intended uses, capabilities, and limitations of our AI platform services

Prevent

AI system behavior should be understood so people can identify potential performance issues, safety and privacy concerns, biases, exclusionary practices, or unintended outcomes

Target users

Two scientists working in a laboratory, with one pointing at a computer screen displaying scientific data or images.

SME Scientists

  • Discovering connections

  • Validating a hypothesis

  • Identifying meaningful signals

  • Making informed predictions

A man wearing glasses in a business suit analyzing a line and bar graph on an iMac computer in an office setting.

Info Investigators

  • Finding something similar

  • Comparing resources

  • Summarizing various resources

Collaboration

Delivering this project against an aggressive milestone required close collaboration across several teams.

Applied Scientists

Syncing with our Machine Learning Scientists to understand the problem and what data we could surface

A diagram with grouped green boxes containing terms related to machine learning model development and management, such as model description, limitations, output explanation, model goals, model owner, design choices, owner contact, model inputs, release date, model outputs, version info, data processed, training data source, definitions, permission, and performance metrics.

User Research

Collaborating with our research team to pinpoint the real user problem and help facilitate testing exercises

Screenshot of a research review document with sections on eligibility criteria, drug indication, and model description, including highlighted text and hyperlinks related to clinical trials and drug assessment.

Design Team

Reviewing design concepts with fellow designers on my team led to the creation of a new design-library component

A presentation slide titled 'Model Transparency' illustrating key concepts: Identified Entities, Meta Data, Grid Reader, and Legend, with a background of thumbnail images of computer screens showing various data and interface elements.

Program Management

Prioritizing features for the first version of this project by outlining what we could surface to the user by the targeted release

A chart with two columns of labeled items. The left column has green labels: 'Model description', 'Model owner', 'Owner contact', 'Release date', 'Version info', 'Training data source', 'Permission', and 'Performance Metrics'. The right column has red labels with some crossed out: 'Limitations', 'Output explanation', 'Model goals', 'Design choices', 'Model inputs', 'Model outputs', and 'Definitions'.
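The prioritized "green" fields above can be thought of as a lightweight model card attached to each model. A minimal sketch of what that v1 transparency record might look like as a data structure, assuming illustrative field names and example values (not the platform's actual schema):

```python
from dataclasses import dataclass, field

# Hypothetical v1 model-transparency record, built from the fields
# prioritized for the first release. Names and values are illustrative.
@dataclass
class ModelTransparencyCard:
    model_description: str
    model_owner: str
    owner_contact: str
    release_date: str            # e.g. an ISO 8601 date string
    version_info: str
    training_data_source: str
    permission: str              # who may apply the model
    performance_metrics: dict[str, float] = field(default_factory=dict)

# Example record for a hypothetical model
card = ModelTransparencyCard(
    model_description="Extracts trend lines from tabular inputs",
    model_owner="Applied Science team",
    owner_contact="owner@example.com",
    release_date="2023-01-15",
    version_info="1.0.0",
    training_data_source="Internal labeled corpus",
    permission="All platform users",
    performance_metrics={"precision": 0.91, "recall": 0.88},
)
```

The deferred "red" fields (limitations, output explanation, design choices, and so on) could be added to the same record in later versions without breaking earlier consumers.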

Expanding the scope

The original task assigned to me was for a simple design to show model output information.

After speaking with stakeholders and learning about constraints, requirements, and user needs, I pushed for a more comprehensive solution.

Screenshot of a project management form titled 'AI Arch & Strategy.' The form includes fields for bucket, progress, priority, start date, due date, and notes, with explanations and instructions in the notes section.

Progressive disclosure

Computer screen displaying a web browser with a search results page.

The formula drop-down

  • Supporting model descriptions as the user types in the formula bar

  • Lightweight information giving several key metadata points

Screenshot of a digital document or webpage with three columns of text, partially covered by a semi-transparent overlay and a purple border surrounding the right column.

The model transparency panel

  • Shows information on the model applied to the page currently in view

  • Provides much more supporting model information than the formula drop-down

  • Gives the user access to model details and performance metrics

Computer screen displaying a blurred user interface with a pop-up window showing profile information.

The model catalog

  • The most comprehensive source of model information

  • It contains all information on all models on our platform

  • Users can deep-dive into specific models and explore new ones
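The three touchpoints above form a progressive-disclosure ladder, each surfacing more model information than the last. A minimal sketch of that tiering, with assumed touchpoint and field names (the actual platform's identifiers are not public):

```python
# Hypothetical mapping from touchpoint to the model fields it surfaces,
# ordered from least to most disclosure. All names are illustrative.
DISCLOSURE_TIERS = [
    ("formula_dropdown", ["model_description", "key_metadata"]),
    ("transparency_panel", ["model_description", "model_owner",
                            "version_info", "performance_metrics"]),
    ("model_catalog", ["everything"]),  # full record for every model
]

def fields_for(touchpoint: str) -> list[str]:
    """Return the model fields surfaced at a given touchpoint."""
    for name, fields in DISCLOSURE_TIERS:
        if name == touchpoint:
            return fields
    raise KeyError(touchpoint)
```

Encoding the tiers in one place keeps the three surfaces consistent: each deeper touchpoint is a superset of the one before it.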

Design exploration

Refining the designs of each touchpoint through testing with customers and getting sign-off from key stakeholders

A collage of multiple screenshots of a software interface displaying various data entries, model descriptions, performance metrics, and technical details in a structured format.
A computer monitor displaying a document titled 'Guidance for Industry' with text and sections visible, on a purple background with a keyboard, mouse, and trackpad placed in front of the monitor.

Final design

Using progressive disclosure in our design approach, we identified vital touchpoints a user has with models in our product. This solution helps provide the right amount of information at the correct time and allows the user to dive deeper depending on their need.

Screenshot of a spreadsheet or software interface with a formula bar showing '=TrendLine|' and a dropdown menu for selecting a formula. The interface includes a description label pointing to the right side of the screen that reads 'Model description'. The background is purple with a lighter panel for the interface elements.

Formula drop-down

Showing a small amount of information about the model entered in the formula bar

Screenshot of a research data analysis interface with a purple background theme, showing a document titled "Cursus eum sollicitudin massa ymo zelus," with a sidebar listing PDF files, search bar with the term "creatinine," and a right panel displaying model transparency details.

Right-side transparency tray

Giving the user data on the applied model, such as the model description and performance metrics

Screenshot of a computer screen showing a data analysis software interface with a pop-up window titled 'SentimentAnalysis' that includes model details, description, discussion, and example formula.

Model Catalog

The deepest dive into information on all the models users have access to on our platform

Information panel

The transparency tray expands out from the right side without obscuring any workspace content, so the user can read model info inline with their work

Tray content

The model transparency tray groups information into general info, such as description, owner, and testing/training data, as well as performance metrics

Exploring model catalog

Users can navigate through the model catalog to discover new models and view all the data we have about any specific one

Impact

  • New platform functionality that supports Microsoft’s mission of responsible AI

  • We created a new shared design system component

  • Increased the transparency of our models

  • Became the point person for everything related to model transparency

Lessons Learned

  1. Show the right info at the right time

  2. Users are more forgiving if they understand how a model works

  3. Transparency is an ongoing process