
January 28, 2021

Artificial Intelligence, Design

Building Trust in AI Systems

COLLEEN MCCRETTON


Bias in the data used by AI algorithms is drawing increasing attention. The internet is full of examples of biased AI systems: recruiting algorithms trained on data that favored male candidates, facial recognition software unable to reliably identify people of color, medical systems trained on data that is not sufficiently diverse, and many more. However, there is another aspect of bias that affects AI systems and bears scrutiny as well: the cognitive biases that users bring to the table.

The human brain processes a great deal of information and often relies on “shortcuts” to classify it, which can create cognitive biases. People tend to believe data that supports their current thinking (confirmation bias), to rely too heavily on the first piece of data they see (anchoring), or to focus on the data they notice first while ignoring the rest (attentional bias). Each of these biases can undermine the effectiveness of an AI system. In a recent project, my FCAT team was tasked with processing data to uncover new insights for business stakeholders. We found that the stakeholders believed the system when it confirmed their thinking but didn’t trust it when it presented new or different information.

As luck would have it, I was recently able to participate in a session for World Usability Day1 that focused on the topic of appropriate user trust in AI systems. The consensus from the discussion was that transparency and explainability will be critical for future adoption, both for the user experience and for the ethical considerations of these systems. The discussion highlighted Explainable AI (XAI), a developing field built on the idea that algorithms cannot be “black boxes.” The idea is gaining industry traction. For example, the Defense Advanced Research Projects Agency (DARPA) has an XAI research program2 that aims to develop machine learning techniques that are both high-performing and understandable to human beings. At some point, XAI is not going to be optional. In the EU, for example, the General Data Protection Regulation (GDPR) gives individuals the right not to be subject to decisions based solely on automated processing (Article 22).3
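As a concrete illustration of what an XAI technique can look like in practice, the sketch below computes permutation feature importance for a simple classifier with scikit-learn. The dataset, model, and features are illustrative assumptions rather than the systems discussed in this post; the point is only that a model’s reliance on each input can be measured and surfaced instead of being left inside a black box.

```python
# A minimal sketch of one common XAI technique (permutation feature importance)
# using scikit-learn. The dataset and model are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Measure how much accuracy drops when each feature is shuffled; larger drops
# mean the model leans on that feature more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features in plain terms.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Output like this gives designers something human-readable to build on when explaining why the system reached a particular conclusion.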

For AI systems to be most useful, they need to engender the appropriate level of trust in users. Users need to be able to tell when a system should be trusted and when it should be questioned. This is even more challenging, and more important, when cognitive biases are in play. Another factor to consider is the “stakes” involved: an algorithm that recommends videos has more margin for error than one that approves loans, suggests medical diagnoses and courses of treatment, or is used in national defense.

To achieve higher levels of trust in AI systems, technologists, data scientists, and data engineers need to embrace the concept of XAI and work on their algorithms so that their output enables UX designers to:

  • communicate transparently about how the algorithms work
  • explain any biases in the data
  • build “scaffolding” to develop user trust in systems and account for cognitive biases
  • support users in making better decisions about when to trust and when to question the output of AI-enabled systems (one possible shape for such output is sketched below)
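To make that last point concrete, here is a hypothetical sketch of how a system’s output could be packaged so the interface can show not just an answer but the model’s confidence, the contributing factors, and any known gaps in the data. The field names, thresholds, and wording are assumptions made for illustration, not a specific FCAT design.

```python
# A hypothetical output schema for an explainable prediction. All names and
# thresholds are illustrative assumptions, not a production specification.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExplainedPrediction:
    label: str                 # what the model predicts
    confidence: float          # the model's own probability estimate (0-1)
    top_factors: List[str]     # plain-language reasons, ordered by weight
    data_caveats: List[str] = field(default_factory=list)  # known gaps or biases in the training data

    def trust_hint(self) -> str:
        """Suggest how much scrutiny the user should apply to this result."""
        if self.confidence >= 0.9 and not self.data_caveats:
            return "High confidence; still review the listed factors."
        if self.confidence >= 0.7:
            return "Moderate confidence; compare against other sources."
        return "Low confidence; treat as a starting point, not an answer."

result = ExplainedPrediction(
    label="approve",
    confidence=0.72,
    top_factors=["stable income history", "low existing debt"],
    data_caveats=["training data underrepresents applicants under 25"],
)
print(result.trust_hint())
```

Surfacing caveats and a scrutiny hint alongside the prediction gives users a basis for deciding when to trust the system and when to question it, rather than leaving that judgment to their cognitive biases.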

Colleen McCretton is Director, User Experience Design in FCAT

 
References & Disclaimers

1 https://worldusabilityday.org
2 https://www.darpa.mil/program/explainable-artificial-intelligence
3 https://gdpr.eu/article-22-automated-individual-decision-making

