Research

Mutual trust and understanding are essential to preventing harmful friction in any relationship. When it comes to the AI-driven systems now deeply embedded in our lives, this trust and understanding can be sorely lacking. Gaps between how these systems work and how people imagine they work, and between how people work and how systems model them, cause friction and distrust. In turn, this friction and distrust prevent us from building the productive, mutually beneficial relationships between users and systems that would allow us to realize these systems' full potential. I work to close these gaps between people and systems by developing new theoretical frameworks for sociotechnical system design that consider how both systems and users understand and symbiotically adapt to each other.

I work at the intersection of technology and social science, drawing on Human-Centered Computing (HCC), Computer Supported Cooperative Work (CSCW), Computer-Mediated Communication (CMC), Social Computing, and Cognitive Psychology. I take a qualitative, participatory approach that focuses on lived experience and perception to examine breakdowns in the relationship between people and AI-driven systems. I adopt a sociotechnical perspective on this relationship to find ways to reduce misunderstandings, build trust, and promote mutually beneficial adaptation. I take a transfeminist design stance, recognizing the dual necessity of centering marginalized communities while also creating systems that support all users, and the importance of embracing relational complexity in pursuit of this goal.

Across my work, I focus on marginalized users, often working with communities I am a member of, such as the queer and trans community. Marginalized users experience the most extreme consequences of breakdowns in the person/AI system relationship, and they have the most experience in adapting around those breakdowns. This focus allows me to contribute both theory and generalizable design implications that can improve the person/AI system relationship broadly, as well as specific, community-based knowledge that helps us better understand how these systems fail marginalized users and provides immediate relief to those who need it most.

Right now, my research program has two main areas, though these areas increasingly share methods and contribute to each other's development.

Theoretical Frameworks for Folk Theorization and Algorithmic Literacy

I work to close the gap in people’s understandings of platforms by developing theoretical approaches such as folk theorization that capture not only user understanding, but the user’s emergent, adaptive relationship with the system. In turn, I use these theoretical frameworks to pursue goals such as boosting algorithmic literacy, or the capacity of users to understand and use AI-driven systems to accomplish their goals. These frameworks for Human-AI collaboration and algorithmic literacy will be crucial to a successful future for AI in our social systems.

Community-Based Member Research to Empower Marginalized LGBTQ+ Users

I work to close the gap in platforms’ understandings of people by developing innovative online qualitative methods that empower communities to work together to articulate points of friction and distrust with systems and propose solutions. I embrace my positionality as a member-researcher with deep knowledge of both social computing research and my own marginalized communities. In turn, I enable others to do the same by assembling, securing funding for, and leading teams of LGBTQ+ junior researchers.


Latest Peer-Reviewed, Archival Publications

Transphobia is in the Eye of the Prompter: Trans-Centered Perspectives on Large Language Models

Morgan Scheuerman, Katy Weathington, Adrian Petterson, Dylan Thomas Doyle, Dipto Das, Michael Ann DeVito, and Jed R. Brubaker. 2025. Transphobia is in the Eye of the Prompter: Trans-Centered Perspectives on Large Language Models. ACM Trans. Comput.-Hum. Interact. Just Accepted (June 2025). https://doi.org/10.1145/3743676

Moving Towards Epistemic Autonomy: A Paradigm Shift for Centering Participant Knowledge

Leah Hope Ajmani, Talia Bhatt, and Michael Ann DeVito. 2025. Moving Towards Epistemic Autonomy: A Paradigm Shift for Centering Participant Knowledge. In CHI Conference on Human Factors in Computing Systems (CHI ’25), April 26–May 1, 2025, Yokohama, Japan. ACM, New York, NY, USA, 26 pages. https://doi.org/10.1145/3706598.3714252

“A Blocklist is a Boundary”: Tensions between Community Protection and Mutual Aid on Federated Social Networks

Erika Melder, Ada Lerner, and Michael Ann DeVito. 2025. “A Blocklist is a Boundary”: Tensions between Community Protection and Mutual Aid on Federated Social Networks. Proc. ACM Hum.-Comput. Interact. 9, 2, Article CSCW021 (May 2025), 30 pages. https://doi.org/10.1145/3710919

Why Can’t Black Women Just Be?: Black Femme Content Creators Navigating Algorithmic Monoliths

Gianna Williams, Natalie Chen, Michael Ann DeVito, and Alexandra To. 2025. Why Can’t Black Women Just Be?: Black Femme Content Creators Navigating Algorithmic Monoliths. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25). Association for Computing Machinery, New York, NY, USA, Article 108, 1–14. https://doi.org/10.1145/3706598.3713842

Whose Knowledge is Valued? Epistemic Injustice in CSCW Applications

Leah Hope Ajmani, Jasmine C. Foriest, Jordan Taylor, Kyle Pittman, Sarah Gilbert, and Michael Ann DeVito. 2024. Whose Knowledge is Valued? Epistemic Injustice in CSCW Applications. Proc. ACM Hum.-Comput. Interact. 8, CSCW2, Article 523 (November 2024), 28 pages. https://doi.org/10.1145/3687062