Rahaf Alharbi

Rahaf is looking straight at the camera and smiling. She is wearing a black t-shirt and standing behind green plants. In this photo, Rahaf has long black hair with dyed maroon ends.

I am a Ph.D. candidate in the School of Information at the University of Michigan, where I am fortunate to be advised by Dr. Robin Brewer and Dr. Sarita Schoenebeck. I work towards a future where disabled people have agency and control over their data. Specifically, I study the privacy implications of common, off-the-shelf visual assistive technologies, and I take community-centered approaches with Blind people to prevent and subvert privacy harms.

My work draws from disability studies to understand and resolve the harms of AI-enabled privacy techniques across the stages of AI development (i.e., from problem formulation and dataset creation to end-user interaction). Through in-depth qualitative research with Blind communities, I argue that while such ‘state-of-the-art’ visual privacy approaches offer increased autonomy, they also raise significant concerns related to cultural representation (e.g., will these systems be inclusive of the privacy needs of Blind people all over the world?) and trust (e.g., how would Blind people confirm the accuracy of outputs from computer vision models?). In future work, I’m researching how to address these tradeoffs through co-design workshops with Blind people.

In addition to independent research, I enjoy vibrant and collaborative environments. Throughout my Ph.D., I’ve worked with over 15 researchers across multiple disciplines (e.g., computer science, engineering, and communication) to support Blind people in creating their own visual assistance technologies, contribute a theoretical framework for participatory AI/ML design with disabled people, and center the perspectives of women around the world on justice-oriented repairs to online harassment.

Prior to graduate school, I earned my Bachelor of Science degree in Mechanical Engineering (with a minor in Ethnic Studies) from the University of California, San Diego.

I have interned at Microsoft Research with the Ability team and, most recently in summer 2023, at Meta with the Responsible AI team.

Email  /  CV  /  Google Scholar  /  Twitter

Updates

Journal and Conference Publications

(rigorously peer-reviewed and archived)

Illustration of a Deaf ASL user with a laptop and mobile device setup. On the laptop, a video conferencing interface shows a video grid of in-person attendees and three other remote attendees. Next to it, a mobile device standing upright displays the video of an ASL interpreter.

Accessibility Barriers, Conflicts, and Repairs: Understanding the Experience of Professionals with Disabilities in Hybrid Meetings

Rahaf Alharbi, John Tang, Karl Henderson

CHI 2023

PDF / ACM DL / Talk

We interviewed 21 professionals with disabilities to unpack the accessibility barriers and conflicts of hybrid meetings, highlighting the creative workarounds and repairs that professionals with disabilities developed in response to these tensions. In the paper, we discuss how invisible and visible access labor may support or undermine accessibility in hybrid meetings. We also argue that hybrid meetings are an important accessibility resource. Building from our analysis, we offer practical suggestions and design directions to make hybrid meetings accessible.

Illustration of a participant trying to use Seeing AI to read mail, but they are frustrated because Seeing AI keeps repeating the same information as they slightly shift their camera.

Hacking, Switching, Combining: Understanding and Supporting DIY Assistive Technology Design by Blind People

Jaylin Herskovitz, Andi Xu, Rahaf Alharbi, Anhong Guo

CHI 2023

PDF / ACM DL / Talk / Dataset

Current assistive technologies (AT) often fail to support the unique needs of Blind people, so Blind people may need to become domain experts who ‘hack’ and create Do-It-Yourself (DIY) AT to creatively suit their needs. To further understand and support DIY AT, we conducted a two-stage interview and diary study with 12 Blind participants. We found that current DIY AT is created both implicitly through creative use cases and explicitly via ideation and development. From our results, we present design considerations for future DIY technology systems that support the customization and ‘hacking’ behaviors Blind people already develop.

First Monday logo

Definition Drives Design: Disability Models and Mechanisms of Bias in AI Technologies

Denis Newman-Griffis, Jessica Sage Rauchberg, Rahaf Alharbi, Louise Hickman, Harry Hochheiser

First Monday

PDF / First Monday DL

We reveal how AI bias stems from various design choices, including problem definition, data selection, technology use, and operational elements, alongside core algorithms. We show that differing definitions of disability drive distinct design decisions and AI biases, and that a lack of transparency and of disabled people's involvement exacerbates these issues. Our analysis offers a framework for scrutinizing AI in decision-making and promotes disability-led design for equitable AI in disability contexts.

GIF of a medicine bottle with the patient name being obfuscated by blurring

Understanding Emerging Obfuscation Technologies in Visual Description Services for Blind and Low Vision People

Rahaf Alharbi, Robin N. Brewer, Sarita Schoenebeck

CSCW 2022

PDF / ACM DL / Talk

Machine learning approaches such as obfuscation are often thought of as the state-of-the-art solution to addressing the visual privacy concerns that are rampant in visual assistance technologies. We interviewed 20 Blind and low vision people to understand their perspectives on obfuscation. We found that while obfuscation offers some benefits, such as gaining more agency and safeguarding against accidental privacy leaks, it also raises significant trust and accessibility issues. Further, participants worried that cultural or gendered privacy needs might be overlooked in obfuscation systems. We applied the framework of interdependence to rethink current obfuscation approaches and provided more inclusive design directions.

CSCW 2022 logo

Women's Perspectives on Harm and Justice after Online Harassment

Jane Im, Sarita Schoenebeck, Marilyn Iriarte, Gabriel Grill, Daricia Wilkinson, Amna Batool, Rahaf Alharbi, Audrey N. Funwie, Tergel Gankhuu, Eric Gilbert, Mustafa Naseem

CSCW 2022

PDF

We conducted a survey in 14 geographic regions around the world (N = 3,993) to understand women’s perceptions of harm associated with online harassment and preferred platform responses to that harm. Results show that, on average, women perceive greater harm associated with online harassment than men, especially for non-consensual image sharing.