
January 17, 2025

Artificial Intelligence

A Conversation with Sayash Kapoor: Author of AI Snake Oil

John Dalton


Thought leader Sayash Kapoor joined FCAT to offer his perspective on bogus AI claims, why people fall for them, and strategies for better leveraging AI technologies. FCAT’s John Dalton spoke with Kapoor to uncover how users can cut through the hype and tap into AI’s true potential.

In his most recent book, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, Sayash Kapoor and his co-author Arvind Narayanan give readers a clear-eyed explanation of why AI fails and why people keep falling for bogus claims and misleading hype. But this isn’t an anti-AI screed — on the contrary, Kapoor and Narayanan also share why they believe that more novel and generative forms of AI might unlock true utility.

FCAT’s VP of Research John Dalton had a chance to catch up with Kapoor about the state of AI and delve into the risks and possibilities surrounding these tools.

Q: First, Sayash, I’d like to thank you and Arvind for writing this book and for the blog that preceded it. Your work continues to provide some of the most sober and sane thinking about artificial intelligence that I’ve encountered. What do you think is the biggest misunderstanding people have about AI?

I think the biggest confusion stems from the fact that AI is an umbrella term for a set of interrelated but distinct technologies. While some types of AI have made massive progress in the last few years, other types — like AI used for making predictions about people's futures — have not made much progress at all. It doesn’t help that there’s no overarching definition of what people mean when they use the term “AI.”

How do you define it?

In our work, we found that there are three loose criteria, which are not necessarily independent or exhaustive, but they give you a flavor of what we mean when we say “AI.”

The first is a technology that automates a task that requires creative effort or training for humans. So, for example, in the last few years we've seen many text-to-image tools. These are models that can generate images using prompted descriptions, which would typically require a lot of creative effort from a human artist. Those tools could be considered AI.

The second criterion is that the behavior of these tools is not directly specified in the code by the developer. For example, consider a thermostat that uses your past preferences and learns how you’ve previously set the temperature to automatically determine the setting you find most comfortable. That’s also AI.

The last criterion is that there should be some flexibility of inputs. If the tool only works when recognizing cats or dogs because it has already seen them within the content used to train it, that’s not AI. However, if it works well (perhaps not perfectly) on new images of cats and dogs, then that’s AI.

I really love the title of your book and blog. As you know, in order for there to be snake oil, there’s got to be a buyer. In the book, you do a brilliant job of explaining why predictive AI is especially prone to overpromising and underdelivering — but we still fall for it. Why is that?

The fact that AI is an umbrella term for all of these different technologies causes a lot of confusion. We talk a lot in the book about social media algorithms, robotics, and self-driving cars — even robotic vacuums. Vendors and the media often conflate these applications with the advances we’ve seen within generative AI.

But I think it’s also important to look at what prompts the demand for AI snake oil. We have a whole section in the book on how AI appeals to broken institutions. For instance, if you look at hiring automation, these tools can be so appealing because you have a hiring manager who may have to sift through hundreds or even thousands of resumes for a single job or a small number of jobs. When you're in that position, a tool that claims to provide an objective ranking of the top 10 candidates seems extremely alluring. As long as institutions are resource-constrained, they will turn to either AI snake oil or some other “magic bullet” to solve their problems.

Just to be clear, you’re not saying that all AI is snake oil. Predictive AI has a lot of problems, but is there a bright spot?

We’ve seen legitimate technical advances with generative AI. I think GenAI, when looked at more broadly, has the potential to impact the lives of all knowledge workers — largely speaking, everyone who thinks for a living. I think this trend will only continue to grow with time as we figure out the appropriate use cases.

We didn't write this book because we think all AI is snake oil. On the contrary, we wanted to give people a way to distinguish snake oil from the tools making rapid and genuine progress, helping them to ignore the former and tap into the latter.

References & Disclaimer

John Dalton is VP of Research at FCAT, where he studies emerging interfaces (augmented reality, virtual reality, speech, gesture, biometrics), socioeconomic trends, and deep technologies like synthetic biology and robotics.

