Conversational AI Can Propel Social Stereotypes

Wired 14 Jan 2020 02:00

Alexa, Siri, Watson, and their talking AI siblings serve to make our lives easier, but they also reinforce gender stereotypes. Polite, subservient digital secretaries like Alexa and Siri are presented as female. The assertive, all-knowing Jeopardy! champion Watson is most often referred to as “he.” New generations of AI are coming that will make this problem more significant and much harder to avoid. As the field expands, designers need to ensure they’re creating a more expansive world, not replicating a narrowly gendered one. Linguists can help them get there.

Last summer, UNESCO released a report warning against the “troubling repercussions” of gendered AI. The researchers recommended closer scrutiny of why many current speech-based AI systems, which interact with millions of people around the world, often default to speaking with a female voice, even though they may claim to be genderless. While any effort to explore and address the issue of AI and gender should be applauded, the report’s authors and others have missed a crucial point: It’s not just a matter of changing pronouns or vocal characteristics. To seriously attack gender stereotyping in AI, designers must attend to far more than the system’s voice.

WIRED OPINION

Sharone Horowit-Hendler is a PhD student in linguistic anthropology at SUNY Albany with an emphasis on gender studies.  Their forthcoming dissertation, Navigating the Binary, is a study of gender presentation in the nonbinary community. James Hendler is a professor of computer science, director of the Institute for Data Exploration and Application at Rensselaer Polytechnic Institute, and a fellow of the Association for the Advancement of Artificial Intelligence. Their most recent book, Social Machines: The Coming Collision of Artificial Intelligence, Social Networks and Humanity (Apress, 2017), discusses emerging implications of AI technology.

Today, conversational systems are moving from AI labs into industrial products, going far beyond the question-and-answer format of our pocket assistants. These new “social machines” will increasingly act as partners in multiperson, multimedia decision-making interactions. For example, rather than answering a single user’s query for the nearest Chinese restaurant, a conversational AI agent in the not-too-distant future will be able to engage with a group of people to help them choose where to go out to eat. Such an AI will participate as a member of the group: “Well if Bob and Bill want Chinese, and Mary likes Thai, why not the fusion place down the street?” it might say. Or it may even jump in more brashly: “OK, then let’s go to the fusion place.”
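The restaurant exchange above can be sketched as a toy Python function. This is purely illustrative, not any shipping assistant’s API: the function name, the compromise-on-disagreement rule, and the two phrasings are our assumptions. The point it demonstrates is the one the article makes next: the same decision can be voiced in a suggesting style or a dictating style, and that stylistic choice is itself a design decision.

```python
from collections import Counter

def suggest_restaurant(preferences, assertive=False):
    """Pick a cuisine for the group, then phrase the reply in either a
    suggesting style or a dictating style.

    preferences: dict mapping each person's name to their preferred cuisine.
    assertive:   False -> polite suggestion; True -> brash directive.
    """
    # Tally the group's preferences.
    tally = Counter(preferences.values())
    top_choice, top_count = tally.most_common(1)[0]

    # If the group is unanimous, go with its choice; otherwise fall back
    # to "fusion" as a compromise, mirroring the article's example.
    choice = top_choice if top_count == len(preferences) else "fusion"

    if assertive:
        return f"OK, then let's go to the {choice} place."
    return f"Why not the {choice} place down the street?"
```

Calling `suggest_restaurant({"Bob": "Chinese", "Bill": "Chinese", "Mary": "Thai"})` yields the polite suggestion about the fusion place; passing `assertive=True` yields the directive version. The decision logic is identical in both cases; only the conversational style differs.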

It is a given in linguistics that speech patterns in conversation invoke gender assumptions regardless of the speaker’s voice or appearance. For example, in standard American culture, men are described in the literature as more often “taking up space” in conversation: They interrupt more often, use more words, eschew certain politeness conventions, and speak with more evident certainty. Women, on the other hand, stereotypically speak less and more politely, give more affirmations and signs of listening, and suggest rather than dictate. In addition, tone, speed, word choice, and other small variations can alter a participant’s perception of the speaker.

Some have tried to address the issue by creating systems with genderless digital voices, but these efforts still miss a critical feature. Even in a voiceless chatbot, a user may attribute male or female gender based on these conversational features. In the previous restaurant example, the first suggestion would likely be seen as polite and female, while the latter assertion would typically be seen as male. Recent studies also show that these cues can outweigh whether a voice sounds stereotypically male or female, and can even contradict the direct assertions of a speaker, whether human or machine, about their own identity. In AI terms, the fact that Siri replies “I don’t have a gender” has not changed the fact that people overwhelmingly perceive the program as female.

Designers need to pay more attention to the ethical issues that emerge from these considerations. If new AIs continue to fall into current gender role stereotypes, then the stereotype of the passive and submissive woman versus the knowledgeable leader/expert man will be furthered. But designers could also be powerful agents of change, not just in our culture but especially in developing nations where the subjugated status of women is a growing international concern. Imagine the impact of a business or medical adviser AI that presents as female, and of assistant companion AIs with default male speaking styles. More female-perceived AIs in expert roles could help evolve society’s perception and lead to women being more accepted in such positions.

Another possibility is for AI to break away from the binary gender dichotomy altogether. A growing percentage of the world’s population does not identify as male or female, falling into categories that are just starting to be better recognized in mainstream society. This includes not only transgender individuals but also the large subpopulation that does not identify with a binary gender at all. For these marginalized groups, which face disproportionately high suicide rates, for example, such AI systems could have a major impact. They could not only popularize the use of the gender-neutral singular they/them pronouns but also reflect the speech patterns of this community. As linguistic studies of nonbinary speech are only now emerging, AI designers partnering with linguistic researchers could benefit this community as well. For nonbinary individuals, seeing their way of speaking reflected in AI role models would be invaluable.
