As a social computing researcher based primarily in Communication and Human-Computer Interaction, my research sits at the intersection of user adaptation and platform responsibility. Fundamentally, I believe that social technologies have tremendous potential for good, but have thus far been designed and implemented in ways that worsen existing problems while introducing novel challenges to our social processes. This breaks our theoretical models of these important social processes while causing harm to users. Users, of course, are the ones who ultimately have to adapt to the realities of these systems in order to use them, but they presently lack the knowledge to do so effectively – or are actively denied this knowledge by platforms.
Social technology does not need to be this way – it’s not too late to pursue the original optimism behind the internet. However, to realize this vision, users need a robust literacy which helps them adapt in the present moment, matched by a longer-term effort by platforms to re-design these systems to reach their full positive potential without creating additional harms. Accordingly, I pursue a program of research which leverages deep qualitative engagement with users to update our theoretical models of social phenomena to account for the novel effects of social technology, identify paths towards increased user literacy of these systems, and provide guidance to designers on how to best mitigate negative effects. I do so across platforms instead of on a platform-by-platform basis, both because this is how users approach the space of social platforms, and because I am interested in approaches and models which can endure in a rapidly-evolving platform landscape.
Right now, I pursue these goals through two complementary streams of research: one on folk theorization, algorithmic/platforms literacy, and self-presentation, and another on online spaces and novel online methods for marginalized communities.
I believe in coming at an issue through a few different lenses before assuming I have even my small fraction of the full picture, and as such I tend to pursue extensible, sustained qualitative work which lets me root my findings deep within the lived experiences of participants. Whenever possible, I take a human-centered approach which engages directly with users and their communities.
My primary analytical lens is constructivist grounded theory, supported by robust data collection via multiple elicitation methods, ranging from classic interview techniques and visual elicitation methods to translational survey and psychometric development work. I also work to develop novel online methods, such as my work expanding the Asynchronous Remote Community framework into a full platform for purpose-built virtual field sites.
I also tend to draw on sensitizing concepts and literature from several different disciplines; I base myself in social computing and HCI, but frequently reference the literature and methods of communication, management information systems, information science, cognitive science, social psychology, anthropology, sociology, education, and, on occasion, political science.
Understanding and Adapting to Algorithmic Environments: Self-Presentation and Literacy via Folk Theorization
One of the most disruptive things modern social technology does is complicate existing social processes, like self-presentation. We had a good idea of how these processes worked offline, and even with early online tech like chat, but our understandings are less clear when it comes to the constantly-changing, algorithm-driven platform landscape. Importantly, this new environment throws up new challenges for users to tackle, such as computational systems (e.g., feeds) which obscure cues as to who is in one’s audience while breaking our existing models of these processes.
To tackle these challenges, I have developed an approach based in user folk theories, the informal, socially-informed, quasi-causal understandings of how platforms function which guide users in on-platform decision-making. Folk theories operate at the user level, and focus on existing user understanding. Compared to other approaches such as mental models, this allows me to faithfully capture the motivations behind emergent behavior as well as the informal, often emotionally-charged relationships between users and platforms. By applying this lens to the study of self-presentation, I have updated our models of the process to reflect the realities of algorithmic systems while identifying key opportunities for platforms to help educate and improve user folk theories. In turn, this has led to current work where I have begun to develop an approach to measuring and boosting what I call platforms literacy via improving folk theory information.
In the future, I intend to continue developing a concept of and tools for promoting platforms literacy, and will expand my folk theories work to other domains where social and societal processes are under algorithmic threat.
Publications/Projects In Progress
- Dissertation Completed December 2020: User Adaptation to Constant Change in Algorithmically-Driven Social Platforms (CHI 2019 Doctoral Consortium – poster | preprint | official version)
- How People Form Folk Theories of Social Media Feeds and What It Means for How We Study Self-Presentation. (CHI 2018 – preprint | official version)
- “Algorithms ruin everything”: #RIPTwitter, Folk Theories, and Resistance to Algorithmic Change in Social Media. (CHI 2017 – preprint | official version)
- Platforms, People, and Perception: Using Affordances to Understand Self-Presentation on Social Media. (CSCW 2017 – preprint | official version)
Organizing & Agenda-Setting Work
- The Algorithm and the User: How Can HCI Use Lay Understandings of Algorithmic Systems? (CHI 2018 Panel – preprint | official version)
Methodological Innovation to Support Online Spaces for Marginalized Communities
Social technology has brought outsized benefits to marginalized groups. Take my own group, the LGBTQ+ community, as an example: social platforms are crucial sources of community, solidarity, connection, identity exploration, and social support, as well as essential health information. There is no real offline equivalent – these platforms are a necessary support structure. At the same time, however, these same platforms have opened us up to more frequent and intense harassment, and also impose sociotechnical structures upon our communities that can act to reinforce toxic intracommunity power dynamics, inflaming old conflicts while introducing brand new ones. LGBTQ+ people, and all marginalized folks, need these online spaces – but we also need them to be better.
In this part of my research, I focus on how to preserve the positive benefits of platforms for marginalized groups while also mitigating the outsized impacts of platform-related problems. Funded by competitive local grants on which I am the PI, I directly engage with the LGBTQ+ community. Importantly, for me, that means the community as a whole – prior LGBTQ+ work in HCI has largely focused on only a minority of the community, cisgender gay men, in the context of single platforms. This left us with models that did not account for a multiple-platform ecosystem or the distinct needs of other groups within the community, e.g. bisexuals and nonbinary individuals – models and solutions that, as a bisexual, transfemme/nonbinary person, I can confidently say are incomplete. As such, in this work, I prioritize methodological innovation as a way to address the lack of full-community inclusion by developing all-online qualitative methods which lower barriers to research participation.
Publications/Projects In Progress
- “Too Gay for Facebook:” Presenting LGBTQ+ Identity Throughout the Personal Social Media Ecosystem. (CSCW 2018 – preprint | official version)
- “‘More Gay’ Fits In Better:” Intracommunity Power Dynamics and Harms in Online Spaces. (CHI 2020 – preprint | official version)
- Values (Mis)alignment: Exploring Tensions Between Perceived Platform and LGBTQ+ Community Design Values (CSCW 2021 – preprint | in press)
- “Facebook Promotes More Harassment:” Social Media Ecosystem, Skill, and Marginalized Hijra Identity in Bangladesh (CSCW 2021 – preprint | in press)
Organizing & Agenda-Setting Work
- Social Technologies for Digital Wellbeing Among Marginalized Communities (workshop at CSCW 2019 – preprint | official version)
- Queer(ing) HCI: Moving Forward in Theory and Practice (SIG at CHI 2019 – preprint | official version)
- ARC: Moving the Method Forward (SIG at CHI 2019 – preprint | official version)
Algorithmic Information Curation & Values
On hold until my dissertation is finished – but always on my mind:
Information was never a direct feed; we used to have humans called “editors” and “reporters” and “opinion leaders” between us and the theoretical mass that is “information.” Now, more and more, we have algorithms. We need to understand how these algorithms shape the flow of information to individuals, and how they differ from the gatekeepers we’re used to. Algorithms are often seen as unbiased alternatives to editors, but this just isn’t true – they’re programmed by humans, and therefore carry human biases. We need to understand what those biases are.
- From Editors to Algorithms: A values-based approach to understanding story selection in the Facebook News Feed. Digital Journalism, 2016. (preprint | version of record)
If you’re interested in working together on research, I’m always open to collaboration. It makes my projects better pretty much 100% of the time. Get in touch with me.