
Artificial Intelligence

January 17, 2025

A Conversation with Sayash Kapoor: Author of AI Snake Oil

John Dalton

Thought leader Sayash Kapoor joined FCAT to offer his perspective on bogus AI claims, why people fall for them, and strategies for better leveraging AI technologies. FCAT’s John Dalton spoke with Kapoor to uncover how users can cut through the hype and tap into AI’s true potential.

In his most recent book, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, Sayash Kapoor and his co-author Arvind Narayanan give readers a clear-eyed explanation of why AI fails and why people keep falling for bogus claims and misleading hype. But this isn’t an anti-AI screed — on the contrary, Kapoor and Narayanan also share why they believe that more novel and generative forms of AI might unlock true utility.

FCAT’s VP of Research John Dalton had a chance to catch up with Kapoor about the state of AI and delve into the risks and possibilities surrounding these tools.

Q: First, Sayash, I’d like to thank you and Arvind for writing this book and for the blog that preceded it. Your work continues to provide some of the most sober and sane thinking about artificial intelligence that I’ve encountered. What do you think is the biggest misunderstanding people have about AI?

I think the biggest confusion stems from the fact that AI is an umbrella term for a set of related but fundamentally distinct technologies. While some types of AI have made massive progress in the last few years, other types, like AI used to make predictions about people's futures, have made very little progress at all. It doesn’t help that there’s no overarching definition of what people mean when they use the term “AI.”

How do you define it?

In our work, we found three loose criteria. They are not necessarily independent or exhaustive, but they give you a flavor of what we mean when we say “AI.”

The first is a technology that automates a task that would otherwise require creative effort or training from humans. For example, in the last few years we've seen many text-to-image tools: models that generate images from written descriptions, a task that would typically require a lot of creative effort from a human artist. Those tools could be considered AI.

The second criterion is that the behavior of the tool is not directly specified in code by the developer. For example, consider a thermostat that learns from how you’ve previously set the temperature and automatically picks the setting you find most comfortable. That’s also AI.

The last criterion is that there should be some flexibility of inputs. If a tool only recognizes the specific cats or dogs it has already seen in its training data, that’s not AI. However, if it works well (perhaps not perfectly) on new images of cats and dogs, then that’s AI.
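To make the second and third criteria concrete, here is a minimal sketch (ours, not from the book) of a toy cat-versus-dog classifier. The two features and the training examples are invented for illustration, and it assumes the scikit-learn library is installed. The classification rule is learned from labeled examples rather than written out by the developer, and the fitted model still labels an input it has never seen.

from sklearn.neighbors import KNeighborsClassifier

# Toy training data: hypothetical [ear_pointiness, snout_length] features.
X_train = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.3, 0.8]]
y_train = ["cat", "cat", "dog", "dog"]

# Criterion two: the cat-vs-dog rule is learned from the examples above;
# no developer wrote an explicit "if the ears are pointy..." rule.
model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

# Criterion three: the model handles an input it never saw during training.
print(model.predict([[0.85, 0.25]]))  # -> ['cat']

A hand-coded lookup table over the four training points, by contrast, would fail that last test: it could only answer for inputs it had already seen.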

I really love the title of your book and blog. As you know, in order for there to be snake oil, there’s got to be a buyer. In the book, you do a brilliant job of explaining why predictive AI is especially prone to overpromising and underdelivering — but we still fall for it. Why is that?

The fact that AI is an umbrella term for all of these different technologies causes a lot of confusion. We talk a lot in the book about social media algorithms, robotics, and self-driving cars — even robotic vacuums. Vendors and the media often conflate these applications with the advances we’ve seen within generative AI.

But I think it’s also important to look at what prompts the demand for AI snake oil. We have a whole section in the book on how AI appeals to broken institutions. For instance, look at hiring automation: these tools can be so appealing because a hiring manager may have to sift through hundreds or even thousands of resumes for just one opening or a handful of jobs. When you're in that position, a tool that claims to deliver an objective ranking of the top 10 candidates seems extremely alluring. As long as institutions are resource-constrained, they will turn to AI snake oil or some other “magic bullet” to solve their problems.

Just to be clear, you’re not saying that all AI is snake oil. Predictive AI has a lot of problems, but is there a bright spot?

We’ve seen legitimate technical advances with generative AI. I think GenAI has the potential to impact the lives of all knowledge workers, that is, broadly speaking, everyone who thinks for a living. I think this trend will only continue to grow with time as we figure out the appropriate use cases.

We didn't write this book because we think all AI is snake oil. On the contrary, we wanted to give people a way to distinguish snake oil from the tools making rapid and genuine progress, helping them to ignore the former and tap into the latter.


John Dalton is VP of Research at FCAT, where he studies emerging interfaces (augmented reality, virtual reality, speech, gesture, biometrics), socioeconomic trends, and deep technologies like synthetic biology and robotics.

