Exploring the Critical Intersection between the Humanities and Artificial Intelligence
Artificial intelligence has infiltrated our daily lives—in the ways we conduct business, govern, provide healthcare and security, and communicate. The large-scale cultural and societal implications of these changes—and the ethical questions they raise—pose a serious challenge as we embrace a future increasingly shaped by the implementation of artificial intelligence technology.
The In Our Image conference (held April 7–22, 2021) examined issues surrounding the integration of artificial intelligence through a series of virtual events—presentations, conversations, webinars, film screenings, and an art exhibition—highlighting perspectives from leading humanists, scientists, engineers, artists, writers, and software company executives collectively advancing inquiry into key emerging questions. On this page, you can access video recordings of the conference sessions, a list of readings suggested by conference presenters, an AI teaching guide, podcast episodes, and other resources inspired by and drawing on the conference.
Session Recordings
Teaching Artificial Intelligence: Exploring New Frontiers for Learning
- Andy Mink, Moderator, Vice President for Education Programs, National Humanities Center
- Michelle Zimmerman, Executive Director, Renton Prep
This webinar features perspectives from educators and industry experts on how they are using artificial intelligence; approaches to teaching about artificial intelligence, including design thinking, project-based learning, and interdisciplinary connections; tools for exploring artificial intelligence with your students; and activities to introduce artificial intelligence concepts. Most importantly, this session considers how the humanities can provide a critical lens for using artificial intelligence in the classroom.
Can Artificial Intelligence Create, and What Is the Role of the Artist?
- Marian Mazzone, Moderator, Professor of Fine Arts, College of Charleston
- Ahmed Elgammal, Professor of Computer Science, Founder and Director of the Art and Artificial Intelligence Laboratory, Rutgers University
- Carla Gannis, Interdisciplinary artist and educator, NYU Tandon School of Engineering
With the prevalence of artificial intelligence in our daily lives, it’s natural to ask, “What will be the future of art in an AI-driven society?” This question becomes even more relevant as AI increasingly appears in the creative domain. Across human history, artists have always integrated new technologies into their practice—from oil paint and printmaking in the Renaissance to photography, motion pictures, and computer animation in the modern era. In this panel discussion, artists Ahmed Elgammal and Carla Gannis talk about their work, created with AI technologies, and how their relationships with AI inform their creative processes.
Regressing to Eugenics? Technologies and Histories of Recognition
- Wendy Chun, Canada 150 Research Chair in New Media, Simon Fraser University
In her keynote address, Wendy Chun discusses how artificial intelligence reproduces and exacerbates ideologies about identity and contributes to the increasingly fractious politics of the twenty-first century. A leading thinker on the influence of new technologies, Chun is the Canada 150 Research Chair in New Media at Simon Fraser University. She also leads the university’s Digital Democracies Institute, whose purpose is to integrate research in the humanities and data sciences to address questions of equality and social justice in order to combat the proliferation of online “echo chambers,” abusive language, discriminatory algorithms, and mis/disinformation.
How Has Artificial Intelligence Challenged the Boundaries of Humanistic Thinking?
- Matthew Booker, Moderator, Vice President for Scholarly Programs, National Humanities Center
- Wendy Chun, Canada 150 Research Chair in New Media, Simon Fraser University
- Hsien-hao Sebastian Liao, Dean, Institute for Advanced Studies for Humanities and Social Sciences, National Taiwan University
- Safiya Umoja Noble, Associate Professor, Departments of Information Studies and African American Studies, UCLA
- Walter Sinnott-Armstrong, Chauncey Stillman Professor of Practical Ethics, Kenan Institute, Duke University
Can AI have emotions? Can machine learning models truly learn? Can AI systems be used to improve human moral judgments? How might collaboration between humanists and technologists produce more rigorous forms of learning and verification? These and other questions are the subject of a lively exchange between panelists Wendy Chun, Sebastian Liao, Safiya Umoja Noble, and Walter Sinnott-Armstrong.
Demonstration of IPsoft’s Amelia
- Chetan Dube, CEO, IPsoft
Chetan Dube envisions a world where humans and machines work closely together to build a radically more efficient planet. His research has focused on deep AI, and he pioneered the use of AI-enabled digital labor across industries. Amelia, IPsoft’s conversational AI agent, uses episodic memory, process memory, intent recognition, and emotional intelligence to respond to complex queries, process transactions, and deliver personalized customer service. Amelia stores facts, concepts, and the associations between them in her semantic memory, and she can be trained on materials ranging from standard operating procedures (SOPs) to policy documents and apply them in conversations.
Can Morality Be Built into Computers?
- Robert D. Newman, Moderator, President and Director, National Humanities Center
- Meredith Broussard, Associate Professor of Data Journalism, New York University
- Chetan Dube, CEO, IPsoft
- David Theo Goldberg, Director, University of California Humanities Research Institute
- Elizabeth Langland, Director, Lincoln Center for Applied Ethics, Arizona State University
Do we believe digital employees will become indistinguishable from human employees this decade? As the democratization of AI leads to a proliferation of such digital agents, how should we prepare to ensure that humans remain in command? When asking whether morality can be built into computers, we must simultaneously ask: whose morality? Could a deep learning AI successfully resolve moral dilemmas, or is there reason to think that morality is different from other domains in which AI has succeeded?
In Whose Image? Envisioning an Inclusive and Vibrant AI Future
- Wesley Hogan, Moderator, Director, Center for Documentary Studies at Duke University
- Natalie Bullock Brown, Assistant Teaching Professor, Department of Interdisciplinary Studies, North Carolina State University
- Marsha Gordon, Professor of Film Studies, North Carolina State University
- Shalini Kantayya, Film Director and Producer, Coded Bias
If human beings are creating artificial intelligence that influences the future, who determines which of us get to imagine that future? Whose voices will be heard, and whose imagination and vision will be realized? Films discussed include The Black Baptism, Coded Bias, Dirty Computer, and Her.
How Do We Address Privacy in the World of Artificial Intelligence?
- Matthew Booker, Moderator, Vice President for Scholarly Programs, National Humanities Center
- Nita A. Farahany, Robinson O. Everett Professor of Law & Philosophy, Founding Director of Duke Science & Society, Chair of the Duke MA in Bioethics & Science Policy, Duke University
- Sarah E. Igo, Andrew Jackson Professor of History, Professor of Law, Professor of Political Science, Professor of Sociology, Vanderbilt University
- Dr. Louis J. Muglia, President, Burroughs Wellcome Fund
Artificial intelligence has transformed what we can learn and decipher from the brain. Are we mistaken to refer to our personal information as “ours” or to claim individual privacy rights to those multifarious details being scooped up by data miners and aggregators? Might there be better, more apt ways to think about individual privacy and personal information—perhaps as collective or public goods? What level of privacy risk is acceptable in trying to use health care data for discovery in the framework of de-identified yet still potentially discoverable information? What is the actual risk from “discoverability”—individual re-identification from de-identified sources?
Special Session: Shakespeare in High Dimensional Data Spaces
- Michael Witmore, Director, Folger Shakespeare Library
Shakespeare’s enduring influence, his facility with language, and his empathic depiction of the human experience not only contribute to our sense of his genius but also make him an irresistible subject for artificial intelligence researchers.
Where Do We Go from Here? The Future of Artificial Intelligence and the Humanities
- Robert D. Newman, Moderator, President and Director, National Humanities Center
- Paul Alivisatos, Provost and Executive Vice Chancellor, Samsung Distinguished Professor of Nanoscience and Nanotechnology, University of California, Berkeley
- Tobias Rees, Reid Hoffman Professor at The New School for Social Research and Director of the Berggruen Institute
- Abby Smith Rumsey, Center for Advanced Study in the Behavioral Sciences
- Şerife (Sherry) Wong, Artist, Icarus Salon and Researcher at the Berggruen Institute
Artificial intelligence allows us to experience and compare many different methods of making sense of the world. How can universities support this kind of multiplication and polyvalence in relation to the humanities and AI? Is the “human” we in the humanities defend against the machine actually defensible? And does the image of the machine we uphold as the non-human actually reflect the kinds of machines AI engineers are building today? If human intelligence is by definition always embodied, what does this mean for artificial intelligence and the promise, or fear, that it will serve or replace human ends? Finally, is there something unique about artificial intelligence that distinguishes its impact on humans from that of earlier technologies?
Podcasts
Nerds in the Woods
In two special episodes of the Center’s podcast series, Nerds in the Woods, we revisit the “In Our Image” conference and consider the challenges of a world increasingly shaped by artificial intelligence from a humanistic perspective.
Building a Moral Machine
The AI Future is Now
Graduate Student Podcasts
As an extension of the Center’s Public Humanities Institutes for PhD students, fourteen recent institute participants were invited to attend the “In Our Image” conference and to develop podcast episodes based on the proceedings.
How Has Artificial Intelligence Challenged the Boundaries of Humanistic Thinking?
- Laken Brooks, English, University of Florida
- Kristina Horn, East Asian Studies, University of California, Irvine
- June Ke, Comparative Literature, University of California, Irvine
- Kimba Stahler, History, Case Western Reserve University
Can Morality Be Built into Computers?
- Nuala Caomhanach, History, New York University
- Grace East, Anthropology, University of Virginia
- JiMin Kwon, Philosophy, University of California, San Diego
- Madelaine MacQueen, Music, Case Western Reserve University
How Do We Address Privacy in the World of Artificial Intelligence?
- Stephen Betts, Religious Studies, University of Virginia
- Megan Cole, English, University of California, Irvine
- Zach Wooten, Leadership Studies, Alvernia University
Where Do We Go from Here? The Future of Artificial Intelligence and the Humanities
- Lauren Cox, Film and Media Studies, University of Florida
- Clio Doyle, English and Renaissance Studies, Yale University
- Joanna Lawson, Philosophy, Yale University
Additional Research and Teaching Resources
In Our Image: Resources for Teaching Artificial Intelligence and the Humanities
In this document, we have gathered readings, provocative questions, and recorded panel discussions for use by teachers and students interested in artificial intelligence in its human context. The National Humanities Center offers these materials as open educational resources in the hope that they can be useful beginnings for classroom conversations.
Conference Bibliography
This bibliography was compiled by the conference organizers and presenters to both inform and prompt further study and inquiry.
How Has Artificial Intelligence Challenged the Boundaries of Humanistic Thinking?
- Wendy Chun, “Red Pill Toxicity, or Liberation Envy,” Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (The MIT Press, forthcoming)
- Steven Shaviro, “Actual Entities and Eternal Objects,” Without Criteria: Kant, Deleuze, and Aesthetics (The MIT Press, 2009)
- Joshua August Skorburg, Walter Sinnott-Armstrong, and Vincent Conitzer, “AI Methods in Bioethics,” AJOB Empirical Bioethics 11, no. 1 (2020): 37–39
Can Morality Be Built into Computers?
- Tom Bawden, “Scientists Create the World’s First ‘Empathetic’ Robot,” iNews UK, January 11, 2021
- Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Wiley, 2019)
- Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World (The MIT Press, 2018)
- Meredith Broussard, “When Algorithms Give Real Students Imaginary Grades,” The New York Times, September 8, 2020
- David Theo Goldberg, “Coding Time,” Critical Times 2, no. 3 (2019): 353–69
- Mar Hicks, Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing (The MIT Press, 2017)
- Charlton McIlwain, Black Software: The Internet and Racial Justice, from the AfroNet to Black Lives Matter (Oxford University Press, 2019)
- Safiya Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York University Press, 2018)
How Do We Address Privacy in the World of Artificial Intelligence?
- Ross Andersen, “The Panopticon is Already Here,” The Atlantic, September 2020
- Nita A. Farahany, “The Costs of Changing Our Minds,” Emory Law Journal 69, no. 1 (2019): 75–110
- Sarah E. Igo, “Me and My Data,” Historical Studies in the Natural Sciences 48, no. 5, special issue on Histories of Data and the Database (December 2018): 616–26
- Daniel Rosenberg, “Whence ‘Data’?,” Berlin Journal 28 (Spring 2015): 18–22
- Chandra Thapa and Seyit Camtepe, “Precision Health Data: Requirements, Challenges and Existing Techniques for Data Security and Privacy,” Computers in Biology and Medicine 129 (2021)
Where Do We Go from Here? The Future of Artificial Intelligence and the Humanities
- James Abello et al., “Culture Analytics: An Introduction” (white paper, Institute for Pure and Applied Mathematics, University of California, Los Angeles, 2016)
- Antonio Damasio, “A Passion for Reason,” Descartes’ Error: Emotion, Reason, and the Human Brain (Harper Perennial, 1995)
- Tobias Rees, “Machine/Intelligence: On the Philosophical Stakes of AI Today,” Beyond the Uncanny Valley: Being Human in the Age of AI (Fine Arts Museums of San Francisco, 2020)
- Şerife (Sherry) Wong, “AI Justice: When AI Principles Are Not Enough,” Medium, August 5, 2019