
The Environment for Ethical Action

T.L. Taylor

A conversation with CMS/W Professor T.L. Taylor, part of the series “Ethics, Computing, and AI | Perspectives from MIT” produced by the School of Humanities, Arts, and Social Sciences.

Q: What opportunities do you see for sociology to inform our thinking about the benefits, risks, and societal/ethical implications of AI?

Each spring I teach a course named CMS.614/Network Cultures. In it, we read four or five books that tackle various aspects of what might be broadly thought of as internet and society issues. We’ve read works that explore Google’s growing role in our everyday lives (Vaidhyanathan, 2012), youth and social media (boyd, 2015), content moderation on platforms (Gillespie, 2018), the role of algorithms in perpetuating stereotypes and racism (Noble, 2018), and even a consideration of networked life in Ghana (Burrell, 2012).

Scholars focused on the critical study of the internet and digital platforms have been doing vital work documenting the entanglement of the social and technological in these systems. This means not simply the ways technology impacts society but also how such innovations are always woven through with complex human action, including the work of a variety of people embedded in companies (Seaver, 2018) as well as the often hidden labor of piecemeal workers who augment algorithmic and platform data (Gray and Suri, forthcoming).

Our students understand the stakes

Most of the students in my classes are majoring in engineering and science fields, but all MIT undergraduates, whatever their majors, take classes in the humanities, arts, and social sciences. Being exposed to the work of scholars like those above often proves to be an eye-opener for them. Our students see the stakes and understand in a real way that systems can indeed produce harm. While they are often excited by the promises new technologies make, they are also very open to understanding how socio-technical systems can impact society and everyday life — often profoundly.

What they also regularly say, however, is that they don't yet have ways to even imagine addressing these issues or thinking about them amid the technical work they do. So there is a gap between their training for their future professional lives and what they recognize, critically, with even just a bit of prompting from social science scholarship. That these students are often exactly the people who will go on to work at companies producing the next iteration of developments in AI, algorithmic systems, platform structures, and big data projects is particularly devastating.

A vital step in coding: thinking through implications

This is not simply an issue of teaching ethics. Two insufficient models often drive our pedagogical activities vis-à-vis a critical engagement with technology. On the one hand, all too often the social sciences and humanities are seen as adjunct domains that simply provide an extra layer; students are asked to read some classics in ethics or reflect on a handful of cases. Alternatively, stand-alone “ethics in domain X” classes get offered to try to fill in the gap. While such moves are well-intentioned, each is rooted in a flawed model: one that assumes a dash of ethics can quickly cultivate a reflective thinker and, by hopeful extension, an ethical technical practitioner.

Dr. Casey Fiesler, who studies technology and research ethics, has written on the harm that comes from “siloing” in these ways and argued that such models typically assume ethics is a specialization area or even something outside the domain of core technical competency. She asks instead, “What if we taught students when they first learned to write code or build technologies that a fundamental component is thinking through the implications — and that if you don’t do that, you’ve missed a vital step, an error just as damaging as not learning to test or debug your code?”



Individual ethics and good intentions are not enough

I want to link up this valuable point with another truth, one deeply evident to me as a sociologist. There are limits to individualistic models of critical engagement. We can cultivate our students as ethical thinkers, but if they aren’t working in (or studying in) structures that support advocacy, interventions, and pushing back on proposed processes, they will be stymied.



Enlist insights from scholars who study human systems, processes, life contexts

This is a key intervention because it moves the conversation beyond simply teaching individuals ethics. While that is a critical component of what we should be doing, it is not sufficient. How might we include attention to structure and policy in our conversations with students? What might it look like to teach modes of accountability and transparency that operate at both the individual and organizational levels? How do we get students and researchers not just to “involve” communities who will be impacted by their work, but to give external stakeholders real power? How might we include these orientations in the very structure of the College of Computing? What might our new college look like if, at its heart, were a commitment to social justice?

Dr. Mary Gray, an anthropologist and researcher at Microsoft Research New England, has been hard at work trying to build these bridge conversations, and her efforts are instructive here. Working closely with computer scientists, she and others are creating processes that keep a fundamental truth visible throughout the chain: The systems, experiments, and models getting built and enacted into platforms are fundamentally tied to humans. Processes of consent, accountability, and transparency from fields like anthropology and sociology have much to offer as we trek along these new paths.

Socio-technical systems and diverse stakeholders

This is aligned, I believe, with what the AI Now Institute calls for in discussing accountability across the “full stack supply chain.” This includes “training data, test data, models, application program interfaces (APIs), and other infrastructural components over a product life cycle.” It also syncs well with a call for fuller critical ethical engagement throughout a curriculum.

This work can’t be developed or implemented only by technologists. It will require skills, expertise, and insight from all corners of the Institute. At its most basic level, it requires domain specialists and those with actual social science training — scholars who have invested many years in working with people and everyday life contexts — to provide insights into processes, variables, or confluences that actually produce bias or harm.

This also means training more social scientists and incentivizing expertise beyond the strictly technical. It requires the skills of those who think about all of these developments as fundamentally socio-technical systems — ones subject to broader collective reflection and oversight. And at its most honest, this work requires a diverse set of stakeholders beyond technologists to hold enough structural power to even propose, when warranted, that a particular technology not be developed.

Series

Ethics, Computing, and AI | Perspectives from MIT

T.L. Taylor: Website
Comparative Media Studies/Writing Program

Stories

Inside the world of livestreaming as entertainment
Taylor looks at how computer gaming and other forms of online broadcasting became big-time spectator sports.

3Q: T.L. Taylor on diversity in e-sports
MIT sociologist’s “AnyKey” initiative aims to level the playing field of online sports.

Profile: Big game hunter
MIT sociologist T.L. Taylor studies the subcultures of online gaming and the nascent world of online e-sports.

References

AI Now Institute. 2018. AI Now Report. Available at https://ainowinstitute.org/AI_Now_2018_Report.pdf.

boyd, danah. 2015. It’s Complicated: The Social Lives of Networked Teens. New Haven, CT: Yale University Press.

Burrell, Jenna. 2012. Invisible Users: Youth in the Internet Cafés of Urban Ghana. Cambridge, MA: The MIT Press.

Fiesler, Casey. 2018. “What Our Tech Ethics Crisis Says About the State of Computer Science Education.” Next, December 5. Available at https://howwegettonext.com/what-our-tech-ethics-crisis-says-about-the-state-of-computer-science-education-a6a5544e1da6.

Gillespie, Tarleton. 2018. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. New Haven, CT: Yale University Press.

Gray, Mary and Siddharth Suri. Forthcoming. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. New York, NY: Eamon Dolan/Houghton Mifflin Harcourt.

Gray, Mary. 2017. “Big Data, Ethical Futures.” Anthropology News, January 13. Available at https://anthrosource.onlinelibrary.wiley.com/doi/epdf/10.1111/AN.287.

Microsoft Research. 2014. Faculty Summit Ethics Panel Recap. Available at https://marylgray.org/2014/08/msr-faculty-summit-2014-ethics-panel-recap/.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York, NY: New York University Press.

Seaver, Nick. 2018. “Algorithms as Culture: Some Tactics for the Ethnography of Algorithmic Systems.” Big Data & Society July–December: 1–12.

Vaidhyanathan, Siva. 2012. The Googlization of Everything (And Why We Should Worry). Berkeley, CA: The University of California Press.


Ethics, Computing and AI series prepared by MIT SHASS Communications
Office of the Dean, MIT School of Humanities, Arts, and Social Sciences
Series Editors: Emily Hiestand and Kathryn O’Neill
Published 18 February 2019

Written by T.L. Taylor

T.L. Taylor is a qualitative sociologist who has focused on the interrelations between culture and technology in online environments for over thirty years. Her work sits at the intersection of sociology, critical internet and game studies, and science and technology studies. She is the author of three books on gaming as well as co-author of a handbook on ethnographic methods. In addition to her academic work, she co-founded the non-profit AnyKey and served as its director of research, then advisory committee chair, from 2015-2021. She was also a founding member of Twitch’s Safety Advisory Council and served on it from 2020-2024. She has been visiting researcher at Microsoft Research New England and is regularly sought out for industry consultations. She teaches subjects that include critical internet studies, qualitative methods, and gaming. She is also currently the director of the MIT Game Lab.
