
August 18, 2022

Artificial Intelligence

Human Centered AI: Q & A with Ben Shneiderman

John Dalton


The remarkable progress in algorithms for machine and deep learning has opened the doors to new opportunities, and to some dark possibilities. However, a bright future awaits those who build on these methods by including human-centered AI strategies of design and testing. As many technology companies and thought leaders have argued, the goal is not to replace people but to empower them by making design choices that give humans control over technology. FCAT recently hosted University of Maryland Professor Ben Shneiderman as a guest of our Speaker Series. Shneiderman is a trailblazer in the field of human-computer interaction, credited with pioneering clickable highlighted weblinks, high-precision touchscreen keyboards for mobile devices, tagging for photos, and more. FCAT’s John Dalton caught up with the professor for a brief Q&A.

JOHN DALTON: Your new book, Human-Centered AI, is the most balanced, pragmatic, and optimistic analysis of artificial intelligence that I’ve read. You lay out a comprehensive guide to building reliable, safe, and trustworthy applications that feature both high levels of human control and high levels of automation. A critical part of your argument is that if we want to achieve a flourishing and humane future, it’s essential for us to understand that computers are not in fact people, and vice versa. Why is clarifying the difference between humans and computers so important?

BEN SHNEIDERMAN: Some advocates of artificial intelligence promote the goal of human-like computers that match or exceed the full range of human abilities, from thinking to consciousness. This vision attracts journalists who are eager to write about humanoid robots and contests between humans and computers. I consider these scenarios misleading and counterproductive, diverting resources and effort from meaningful projects that amplify, augment, empower, and enhance human performance.

I respect and value the remarkable capabilities that humans have for individual insight, team coordination, and community building. I seek to build technologies that support human self-efficacy, creativity, responsibility, and social connectedness.

JOHN DALTON: We’re awash in news about automation that fails, involving everything from biased school admissions and credit applications to autonomous vehicles that kill. Even Boeing ran into challenges recently with the 737 MAX. Civil aviation has some of the most robust safety measures and standards in place. What can even those of us outside of the airline industry learn from tragedies like that?

BEN SHNEIDERMAN: The two Boeing 737 MAX crashes are a complex story, but one important aspect was the designers’ belief that they could create a fully autonomous system so reliable that the pilots were not even informed of its presence or activation. There was no obvious visual display to inform the pilots of its status, nor was there a control panel that would guide them to turn off the autonomous system. The lesson is that excessive belief in machine autonomy can lead to deadly outcomes. When rapid performance is needed, high levels of automation are appropriate, but so are high levels of independent human oversight to track performance over the long term and investigate failures.

JOHN DALTON: Your vision for the future is one in which AI systems augment, amplify and enhance our lives. Are there products and services out there today that you believe already do this?

BEN SHNEIDERMAN: Yes, the hugely successful digital cameras rely on high levels of AI for setting the focus, shutter speed, and color balance, while giving users control over the composition, zoom, and decisive moment when they take the photo. Similarly, navigation systems let users set the departure and destination, transportation mode, and departure time, then the AI algorithms provide recommended routes for users to select from as well as the capacity to change routes and destinations at will. Query completion, text auto-completion, spelling checkers, and grammar checkers all ensure human control while providing algorithmic support in graceful ways.

JOHN DALTON: As you point out in your book, there’s a lot of work to do before our design metaphors and governance structures support truly human-centered AI. What can we do to accelerate the adoption of HCAI?

BEN SHNEIDERMAN: Yes, it will take a long time to produce the changes that I envision, but our collective goal should be to reduce that time from 50 years to 15. We can all begin by changing the terms and metaphors we use. Fresh sets of guidelines for writing about AI are emerging from several sources, but here is my draft offering:

  1. Clarify human initiative and control
  2. Give people credit for accomplishments
  3. Emphasize that computers are different from people
  4. Remember that people use technology to accomplish goals
  5. Recognize that human-like physical robots may be misleading
  6. Avoid using human verbs to describe computers
  7. Be aware that metaphors matter
  8. Clarify that people are responsible for use of technology

Another step will be revising the images of future technologies to replace humanoid robots with devices that are more like cars, elevators, thermostats, phones, and cameras.

John Dalton is VP Research in FCAT, where he investigates socioeconomic trends and engages in in-depth studies focused on emerging interfaces (augmented reality, virtual reality, speech, gesture, and biometrics).

