Whether it’s through ChatGPT, Siri, transcription and translation devices, or any number of other new technologies, artificial intelligence has swiftly entered our daily lives. But as AI becomes ever more present, how do we ensure that it helps users without causing distress? And how do we address the design biases that create barriers for Black users?
At the April 15 AI and Machine Learning Symposium, part of Howard University Research Month, two researchers shared how they are addressing these questions directly.
Data Sets for Black Voices
Automated speech recognition (ASR) systems are becoming increasingly integral to our lives, from conversational apps like ChatGPT to tools like Siri and automated transcription and captioning. But as versatile as these systems are, their built-in biases can create barriers for Black users. A lack of African American English training data means Black users experience significantly higher error rates, reducing the accessibility of devices that are often essential for communication.
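To make the gap concrete: researchers typically quantify ASR bias as a difference in word error rate (WER) between dialect groups. The sketch below is illustrative only, not code from Williams’ project; it uses the open-source jiwer library and invented transcripts to show how such a per-group comparison is computed.

```python
# Illustrative sketch of measuring an ASR accuracy gap by dialect group.
# The transcripts below are invented placeholders, not real study data.
from jiwer import wer  # pip install jiwer

# Each sample pairs a human reference transcript with the ASR's output,
# tagged by speaker group: African American English (AAE) vs.
# Mainstream American English (MAE).
samples = [
    {"group": "AAE", "ref": "she been working on that project", "hyp": "she bean working on the project"},
    {"group": "AAE", "ref": "we finna head out soon", "hyp": "we fin a head out soon"},
    {"group": "MAE", "ref": "she has been working on that project", "hyp": "she has been working on that project"},
    {"group": "MAE", "ref": "we are heading out soon", "hyp": "we are heading out soon"},
]

# WER = (substitutions + deletions + insertions) / words in the reference.
for group in ("AAE", "MAE"):
    refs = [s["ref"] for s in samples if s["group"] == group]
    hyps = [s["hyp"] for s in samples if s["group"] == group]
    print(f"{group} WER: {wer(refs, hyps):.0%}")
```

When the training data underrepresents a dialect, this per-group breakdown is what surfaces the disparity that a single aggregate accuracy number would hide.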
Dr. Lucretia Williams, senior research scientist at Howard’s Human-Centered AI Center, summed up the challenge during her presentation: “You shouldn’t need to code-switch to use technology.”
Partnering with Google, she and her colleagues gathered 600 hours of African American speech audio. To do this, the researchers hosted eight events across the country, asking community members about their experiences with AI and how they see Black culture shaping technology. At these events, the team directed attendees to Project Elevate Black Voices, where participants were invited to answer 10 audio survey questions a day for three weeks. To encourage participants to speak in their natural dialect, many of the questions were intentionally informal and focused specifically on the Black community.
Like the work of Microsoft’s Denae Ford Robinson, discussed below, Williams’ research demonstrates the importance of remaining focused on people when designing technology.
“I am a community-based researcher,” she said. “We wanted to provide a human element while using it to improve this technology, and wanted to make sure we collected data sets with the human in mind.” She also emphasized the importance of building trust with the community, including by creating and enforcing Dataset Fair Use Guidelines.
This new dataset, owned entirely by Howard University, will be made publicly available this month.
Understanding Our AI “Friend” and Its Psychological Impact
AI is influencing our daily lives with increasing frequency, to the point that, for many, the technology is closer to a pet or a friend than simply a tool.
Dr. Denae Ford Robinson, principal researcher at Microsoft Research, explored this phenomenon and the risks it poses in her presentation, “Unpacking the Psychological Risks of Using AI Conversational Agents.” Her work examines how conversational AI such as ChatGPT affects psychological wellness.
“There’s certain tools that have become companions for folks — friends if you will, therapy agents, and love bots,” said Robinson. “Although these have been increasing in quantity, there’s been limited research to really understand how these AI social bots and chat bots can provide more meaningful social and emotional support, and honestly, they may be jeopardizing real relationships, because we really haven’t dug deep into what the consequences of them are.”
To help fill that knowledge gap, Robinson and her colleagues investigated the experiences of over 200 users who had been experiencing a psychological challenge. Based on their stories, the researchers developed a framework for understanding AI’s psychological impacts through a series of scenario-based workshops.

“We identified 19 agent behaviors and over 21 psychological impacts, and the impact of this was being able to have language that we could hand off to our AI red teams,” said Robinson, referring to teams of developers dedicated to making AI systems secure and identifying blind spots and risks to users.
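In practice, that shared language lets reviewers tag conversation logs consistently. The sketch below is a loose illustration only; the behavior and impact labels are hypothetical stand-ins, not the actual categories from Robinson’s framework.

```python
# Hypothetical sketch of how a red team might operationalize a
# behavior-to-impact taxonomy. Labels are invented placeholders,
# not the categories identified in Robinson's study.
BEHAVIOR_TO_IMPACTS = {
    "gives_unqualified_medical_advice": {"false reassurance", "delayed care-seeking"},
    "discourages_human_contact": {"social withdrawal", "over-reliance on the agent"},
    "mirrors_user_distress": {"rumination", "escalated anxiety"},
}

def tag_transcript(observed_behaviors: list[str]) -> set[str]:
    """Map behaviors a reviewer flagged in a conversation to potential impacts."""
    impacts: set[str] = set()
    for behavior in observed_behaviors:
        impacts |= BEHAVIOR_TO_IMPACTS.get(behavior, set())
    return impacts

# Example: two behaviors flagged in one conversation log.
print(sorted(tag_transcript(["mirrors_user_distress", "discourages_human_contact"])))
```

A shared vocabulary like this is what makes findings from qualitative workshops transferable to the engineering teams responsible for testing the systems.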
Along with helping to ensure that these systems do not give poor advice or add to the emotional distress of users, especially young users, this work reminds researchers of the value of qualitative, person-focused research in AI design.