New research suggests that AI models exhibit subtler forms of racism than humans do. The study focused on how AI responds to different dialects of English compared with Standard American English.
According to the research, AI chatbots are more inclined to suggest the death penalty when interactions take place in African American English (AAE) rather than in Standard American English. The AI also tended to associate AAE speakers with lower-status jobs. AAE is a distinct dialect spoken predominantly by Black Americans and Black Canadians.
The study, which has not yet been peer-reviewed, examined subtle racial biases in AI by analyzing how the technology responds to different English dialects.
Much of the existing research on racism in AI has concentrated on explicit forms of bias, such as how an AI chatbot reacts to the term “black.”
Valentin Hofmann, one of the study’s authors, told Sky News, “The dialect of African American English triggers a level of racism in language models more negative than any human stereotypes about African Americans that have been experimentally reported.” He noted that when questioned directly about African Americans, AI tends to attribute positive qualities such as “intelligent” and “enthusiastic.” The underlying biases become apparent only when the AI interacts with African American English itself.