May 30, 2025

A Conversation with Mairead Sullivan

Interview by Joseph Wakelee-Lynch


Mairead Sullivan, who is chair of the Women and Gender Studies program in the LMU Bellarmine College of Liberal Arts, is the principal investigator of a recent Mellon Foundation grant totaling $431,000 to study the ethical and social justice implications of artificial intelligence through the lens of disability studies. The grant is well timed because LMU will launch a disability studies minor in fall 2025.

Why is it important to study the connection between AI and disability issues?

As AI systems begin to replicate or even outperform human cognitive functions, society is confronted with some unsettling questions: What counts as intelligence? Whose ways of thinking and learning or communicating are privileged in the system? These are not just technical questions. They are deeply philosophical and ethical questions about what it means to be human that have long been at the heart of disability studies.

Is AI technology once again raising the issue of mainstream society defining what counts as human in ways that are not accepting or fully inclusive of people with disabilities?

We know that AI replicates a number of systems, norms, and ways of thinking that are embedded in our culture, and in that way it reproduces inequalities on the basis of disability, race, gender, and more. That’s a very important aspect of this project.

Is the goal of the project to answer a particular question about AI and disabilities?

We’re not trying to answer a question; we’re trying to shape a world. In my vision, that is the unique role of both the humanities and a Jesuit-Marymount education. Oddly enough, the humanities don’t always provide us with answers. The humanities push us to ask different questions. So, the humanities don’t just interpret the world; they help to shape the future of it. As a Jesuit-Marymount university, LMU is called on not just to critique injustice but to reimagine what justice could look like in a world continuously shaped by AI.

It seems that while identifying individual injustices is a concern, a broader concern is to lay a foundation that will enable the future development of AI to be more comprehensive and inclusive. Is that correct?

Correct. Our proposed project addresses a pressing need to rigorously explore the ethical and social justice implications of AI technology, particularly in relation to disability. On one hand, disability studies challenges frameworks that aim to fix or eliminate disability through medicalized, individualized intervention. So, for disabled communities, findings about the cultural biases of AI raise concerns that AI technology that is designed for their use may reinforce a harmful narrative, one in which disability is a problem to be fixed or eliminated. While those are major concerns, our project also aims to go beyond those issues using the unique insights of disability studies to explore the promises and challenges of AI. Disability studies pushes us to consider how AI technologies reduce complex human experiences into simplified categories that don’t account for the variability of human experience. So, the questions that drive our project are: How do emerging technologies reshape the boundaries between ability and disability? How do they redefine societal expectations about productivity, communication, and embodiment? We’re asking questions that are often overlooked in more traditional approaches in the study of technology.

What specific forms will the project take?

First, during the next three years, we will fund nine faculty fellowships in the context of AI and disability studies. Second, we’ll launch nine community-based learning courses during that time. Each of our faculty fellows will teach a course in their discipline that engages with disability studies and AI, and they’ll work with students to connect to projects in the community. Third, each fall for the next three years, we’ll host a symposium whose goal is to bring together scholars, technologists, and disabled users as we think about how to develop AI-driven technology.

What’s your greatest hope for this project?

For me, the goal is not simply educating future engineers or ethicists. Rather, we’re forming people who will shape the moral architecture of our technological world. And that’s what the humanities and a Jesuit-Marymount education do at their best.
