April 13, 2022
Technology & Society, Artificial Intelligence
Can a Machine Be Moral? A Q&A with Jean-François Bonnefon
FCAT recently hosted a presentation by psychologist and author Jean-François (JF) Bonnefon on his latest book, “The Car That Knew Too Much”. The book discusses a groundbreaking experiment, the Moral Machine, that allowed millions of people from over 200 countries and territories to make choices about life-and-death dilemmas posed by driverless cars. Should they sacrifice passengers for pedestrians? Save children rather than adults? Kill one person so many can live? Following his presentation, FCAT’s Sarah Hoffman caught up with JF to ask a few additional questions about the largest experiment in moral psychology to date.
Sarah Hoffman: Why do you think the Moral Machine went viral? Why do you think this topic got the attention of so many people?
Jean-François Bonnefon: I think that for many people, the Moral Machine was the first time they realized how challenging moral dilemmas can be. Even people who knew about dilemmas such as “Is it OK to sacrifice one life to save five?” and thought they had an answer realized that they had a hard time making decisions on the complex scenarios the Moral Machine offered. And the Moral Machine had millions of scenarios to offer, which ensured that no two people saw the same ones. In retrospect, this played a big role in its virality, because it made it YouTube-friendly: everybody could post a different reaction video to the Moral Machine, because they all got different scenarios to react to.
SH: Is there anything you wish you had done differently when designing the Moral Machine?
JF: If I were to do it all over again, I would think harder about including a homeless person as one of the potential crash victims. We did this because we believed that, sadly, people would sacrifice the homeless character more often; and we wanted to use this as an example of why you cannot simply follow the preferences of the crowd, given that some of these preferences are unethical. But as the Moral Machine went viral, we lost control over the way it was presented in traditional and social media; and on some occasions the presence of the homeless character was perceived as a sign that self-driving cars could actually be programmed to crash into the poor. I don't like that the Moral Machine may have indirectly contributed to this belief.
SH: What are your next steps for this research?
JF: The crash scenarios in the Moral Machine were intentionally simplified, so that people would not have to absorb too much information about road safety statistics. This approach was instrumental in the success of the website, but a shortcoming was that the scenarios lacked realism. So we have been developing another version of the platform, which uses realistic scenarios and actual traffic statistics. While this data-collection platform is not as simple or engaging as the Moral Machine, its results will be more directly relevant to policymaking.
SH: Do you think there’s anything we can learn from all this beyond self-driving cars that may apply to other technologies or use cases?
JF: Beyond self-driving cars, the data we collected address one huge question: whether people put different values on different lives when you cannot save everyone; and, as a corollary, which kinds of policies they are more likely to accept if governments need to be explicit about the lives they prioritize in a crisis. We have been through this situation at least twice during the pandemic: when ventilators became scarce, many countries had to explain which patients would be prioritized; and when the first vaccines became available, governments had to explain who would be first in line to get a shot. In cases like these, the data of the Moral Machine or comparable studies can help anticipate how citizens will react to high-stakes policies decided under close public scrutiny.
Sarah Hoffman leads AI and Machine Learning (ML) research for FCAT, helping the firm understand trends in these technologies and their potential impact on Fidelity.