I have several research interests, but they all center on how humans understand and are affected by algorithmically-driven systems. I am especially interested in how people make sense of algorithmically-driven media and information systems, and how those systems shape human identity, information flows, and cognition. I believe this is a key area for study, as it will become increasingly important to policymaking, technological development, and ethical/moral thought throughout the remainder of the 21st century.
For my own research, I generally gravitate towards mixed methods, as I believe in coming at an issue through a few different lenses before assuming I have even my small fraction of the full picture. I also tend to draw from several different disciplines; I base myself in communication, but frequently reference the literature and methods of HCI, information science, cognitive science, social psychology, anthropology, sociology, education, and, on occasion, political science.
Here are the key areas I’m doing research in. Links to presentations and publications will be added under each area as they become things that actually exist in our shared understanding of the world.
Adapting to Computationalized Social Environments
One of the big things modern tech does is add new dimensions to existing social processes, like self-presentation. We had a good idea of how these processes worked offline, and even with early online tech like chat, but our understanding is less clear when it comes to the constantly-changing, algorithm-driven platform landscape. Importantly, this new environment throws up new challenges for users to tackle, such as obscuring cues as to who is in one’s audience. In this part of my work I focus on investigating how everyday users of these algorithmically-driven computational systems are adapting to the new computational social reality, how they appropriate and resist systems, and how we can help ease the adaptation process for the average user. In the process, I am helping to update key theories in the areas of audience management, self-presentation, and technology continuance.
- “Too Gay for Facebook”: Presenting LGBTQ+ Identity Throughout the Personal Social Media Ecosystem. Proceedings of the ACM on Human-Computer Interaction (CSCW), 2018. (preprint)
Understanding Computational Actors
We (HCI academics) can probably all agree that computational actors, usually in the form of algorithmically-driven systems, are everywhere, but how aware is the average person? If they are aware, what do algorithms mean to them? In this part of my work I am focused on how individuals perceive algorithmically-driven systems, particularly those that operate on social media platforms. I am especially interested in how people think algorithms affect their information flows and their opportunities for self-presentation. I take an approach primarily based in high-level affordances and user folk theories.
- How People Form Folk Theories of Social Media Feeds and What It Means for How We Study Self-Presentation. Proceedings of the ACM Conference on Human Factors in Computing Systems, 2018. (preprint)
- “Algorithms ruin everything”: #RIPTwitter, Folk Theories, and Resistance to Algorithmic Change in Social Media. Proceedings of the ACM Conference on Human Factors in Computing Systems, 2017. (preprint)
- Platforms, People, and Perception: Using Affordances to Understand Self-Presentation on Social Media. Proceedings of the 20th Annual ACM Conference on Computer-Supported Cooperative Work and Social Computing, 2017. (preprint)
- More to come soon – we have several projects in this area underway at Northwestern and with our partners at Stanford.
Algorithmic Information Curation & Values
Information was never a direct feed; we used to have humans called “editors” and “reporters” and “opinion leaders” between us and the theoretical mass that is “information.” Now, more and more, we have algorithms. We need to understand how these algorithms shape the flow of information to individuals, and how they differ from the human gatekeepers we’re used to. Algorithms are often seen as unbiased alternatives to editors, but this just isn’t true – they’re programmed by humans, and therefore carry human biases. We need to understand what those biases are.
- From Editors to Algorithms: A values-based approach to understanding story selection in the Facebook News Feed. Digital Journalism, 2016. (preprint | version of record)
- I Don’t Want To See This: Selective Avoidance, Friend Links, and Political Filter Bubble Formation on Facebook, presented at the 2016 conference of the Midwest Political Science Association.
Trust in Integrative Technology and Artificial Agents
Why do we trust artificial agents? Or, rather, do we trust them past the point of simple convenience? We’ve integrated algorithmic systems like search deep into our daily lives, indicating that we trust them on some level, but new research indicates that people reject algorithmic systems once they’re aware of them. How can we create lasting, trusting partnerships between humans and technological systems?
- The Sentimental Robot: Soldiers, Empathy, and Artificial Intelligence, paper presented at the 2015 conference of the International Association for Computing and Philosophy
- Let Me Google That For You: Trust, Thought and Truth in the Age of Search Engines, poster presented at the 2015 conference of the Broadcast Education Association
I am currently lucky enough to be carrying out research related to these interests at Northwestern University’s Social Media Lab with Prof. Jeremy Birnholtz. In the past, I also had the good fortune to work on interesting research for others as part of my fellowship at George Washington University. There, I helped Dr. Patricia Phalen with research for an upcoming book on television writing and Prof. Frank Sesno with a diverse array of projects.
If you’re interested in working together on research, I’m always open to collaboration. It makes my projects better pretty much 100% of the time. Get in touch with me.