The Default Gender in AI Assistant Technologies: Possible Impact on Women in Africa

Introduction

Artificial Intelligence (AI) assistant technologies such as digital voice assistants and chatbots have become increasingly ubiquitous and powerful in Africa. In a datafied society, AI assistant technologies learn a user’s history, such as purchase preferences, home ownership, location and family size, to answer complex questions, provide recommendations, make predictions and even initiate conversations. In Africa, service providers such as banks, hospitals and insurance companies are adopting AI assistant technologies to serve their growing customer bases, in large part because these technologies improve productivity and profit-making. For example, several South African health insurance companies, such as Discovery, use AI-powered chatbots that give clients up-to-date information on maintaining a healthy lifestyle, including scheduling annual checks such as pap smears, exercising regularly and purchasing healthy food. In Kenya, Jacaranda Health is a non-profit whose AI assistant technology improves maternal health by providing mothers with free, lifesaving advice and healthcare referrals; its chatbot is already being used by 120,000 pregnant women and new mothers across 200 hospitals in Kenya. Nigeria has a fintech start-up called Nomba that tackles the financial inclusion of women through a text messaging application: its AI chatbot lets users pay bills and make financial transactions in a simple conversational manner. Researchers and practitioners widely acknowledge the potential social and economic benefits of AI assistant technologies, including increased efficiency, reduced costs and human error, and enhanced customer experience.

Most tech companies default the gender of their AI assistant technologies to a female voice and name, reflecting patriarchal ideology. In Africa, AI assistant technologies with default female voices, names and characters reproduce harmful gender stereotypes about the role of women in society and the type of work women perform. They instill a culture in which women take orders from men and are subordinate to them. These technologies draw on stereotyped gender roles in offering digitalized secretarial work, traditionally performed by women. Moreover, because these technologies operate on the command of their user, with no right to refuse or say no, they arguably shape expectations for how real women ought to behave. A few examples include Ghana’s Abena, Kenya’s Sophie Bot, Amazon’s Alexa, Microsoft’s Cortana, Apple’s Siri, Samsung’s Bixby and Google Assistant, all of which are highly feminized by design. The frequent use of a female voice merits close scrutiny because of its impact on perceptions of women, as well as existing sociocultural expectations, stereotypes and demands regarding how women are expected to act in society. The challenge for Africa is that the majority of the voice data used to train algorithms is held by a few tech companies such as Amazon, Microsoft, Apple, Samsung and Google. This makes it difficult for tech companies in Africa to develop high-quality AI assistant technologies that are gender neutral.

This blog post critiques the reproduction of gender bias in the discourse of AI assistant technologies. The analysis examines the female gendering of AI assistant technologies, with particular attention to stereotypic patterns of verbal abuse, and raises two questions: why do most AI assistant technologies have female names and voices, and why do they have submissive personalities? It also examines regulatory responses and the role of data protection laws regarding AI assistant technologies in Africa.

Stereotypic Harms

In Nigeria, the increasing use of gendered AI assistant technologies in commercial banks has two potential impacts on Nigerian women working in the financial sector. First, women working in customer service, brand representation and sales face critical scrutiny from the public, because the gendered AI assistants in these sectors portray their personality and appearance as polite, calm and less intelligent. This assumption, in turn, influences social norms around women’s ability, capacity to deliver at work and personality. Second, the stereotype that women are good communicators and always available creates a harmful environment for verbal abuse from customers who may be dissatisfied with the services rendered. Gendered AI assistant technologies could therefore introduce and impose new forms of gendered expectations upon women. Many AI assistants today are already marketed on the premise of an ever-ready, ever-available, polite assistant. For instance, Nigeria’s First City Monument Bank gives its assistant Temi the following description: “Hi! I’m Temi, your personal person. I’ll always have time for you any time of the day. Ready to discuss your plans be it health, travel or even future goals. The good news is, I get things done and I’ll never reply to you with a ‘k’.” Such statements suggest that AI assistant technologies are better at communicating than women, and they feed an expectation during recruitment processes that women should stick to soft, feminine roles.

This blog post argues that by making AI assistant technologies female-sounding by default, tech companies have preconditioned users to fall back on antiquated and harmful perceptions of women. For example, when verbally abused, Siri tends to chime in with a sly comment or deflect the aggression, creating habits that treat harassment as normal. Feminized AI assistant technologies have become targets for verbal abuse. In 2021, Apple removed the default female voice setting, replacing it with a mechanism for users to choose among several voices during device setup, a course of action that other technology companies should mirror. All tech companies should proactively prompt users to personalize their AI assistant technologies to their preferred voice from the outset, ensuring that the default voice is not invariably female, as sketched below.
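
A minimal sketch, in Python, of what such a setup flow could look like; the VoiceOption class, the voice catalogue and the pitch descriptions are hypothetical illustrations, not drawn from any vendor’s actual API:

```python
from dataclasses import dataclass

@dataclass
class VoiceOption:
    voice_id: str
    description: str  # voices described by timbre, not by gender

# Hypothetical voice catalogue; deliberately, no entry is pre-selected.
VOICES = [
    VoiceOption("voice_1", "Warm, lower pitch"),
    VoiceOption("voice_2", "Bright, higher pitch"),
    VoiceOption("voice_3", "Neutral, mid pitch"),
]

def choose_voice() -> VoiceOption:
    """Prompt until the user makes an explicit choice, so no
    gendered voice ships as the default."""
    for i, voice in enumerate(VOICES, start=1):
        print(f"{i}. {voice.description}")
    while True:
        raw = input(f"Select a voice for your assistant (1-{len(VOICES)}): ")
        if raw.isdigit() and 1 <= int(raw) <= len(VOICES):
            return VOICES[int(raw) - 1]
        print("Please enter a valid option number.")
```

The design choice is that selection is mandatory and the options are described by sound rather than gender, so the burden of opting out of a female voice never falls on the user.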

UNESCO, in its 2019 report titled “I’d Blush if I Could”, explains that the female gendering of AI assistant technologies manifests primarily during the algorithm’s development process, the training of datasets or automated decision-making. Judging from this, tech companies have failed to build proper safeguards against hostile, abusive and gendered language in AI assistant technologies. Some tech companies justify their choice of gendered AI assistant technologies by referencing studies indicating that people generally prefer a female voice to a male voice. Because such research suggests customers want their AI assistant technologies to sound like women, companies assert that they can optimize profits by designing feminine-sounding voice assistants.
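
By way of illustration, a safeguard of the kind called for here might route abusive input to a firm boundary response rather than the sly or deflecting replies described above. The sketch below is purely hypothetical: is_abusive() is a naive keyword stand-in for a trained toxicity classifier, and none of the names correspond to any real assistant’s API.

```python
ABUSIVE_TERMS = {"stupid", "useless"}  # illustrative placeholder list

def is_abusive(message: str) -> bool:
    """Naive keyword check standing in for a trained toxicity classifier."""
    return any(term in message.lower() for term in ABUSIVE_TERMS)

def respond(message: str) -> str:
    """Route abusive input to a firm boundary response instead of a
    playful or deflecting one."""
    if is_abusive(message):
        return "That language is not acceptable. Let's keep this respectful."
    return f"Processing request: {message}"

print(respond("you are useless"))  # prints the boundary response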

Regulatory Responses

There is a dearth of data on the regulation of AI assistant technologies in Africa. AI assistant technologies are not regulated as such on the continent because, like other types of technologies, they are merely tools. Generally, it is the use of a technology that is regulated, for instance the purposes for which an AI assistant technology is used and/or how it is used, rather than the technology itself. Because AI assistant technologies are software applications, the regulations that apply to software and software services apply to them as well. Tech companies should therefore consider the following questions:

  1. What intellectual property issues arise in relation to the use or implementation of the AI assistant technology?

  2. Who owns rights in the AI assistant technology source code itself?

  3. Where operation of the AI assistant technology involves an element of Machine Learning, who owns the rights in the resulting model and outputs?

  4. Are there licensing issues?

  5. Is the AI assistant technology service provided by a third party and what contract terms apply to its use?

The proposed European Union Artificial Intelligence Act, once approved, will be the world’s first comprehensive set of rules on Artificial Intelligence. How the Act defines AI assistant technologies will determine whether it regulates them. The EU AI Act will also prohibit altogether the marketing or use of certain types of AI systems, so the use of an AI assistant technology would be prohibited to the extent that it serves one of those prohibited purposes, for example harmfully exploiting vulnerable people. Certain AI systems will be considered ‘high risk’, again based on their purpose rather than on whether they involve the use of bots, and high-risk AI systems are subject to a long and detailed set of requirements.

The 2020 Global Government Artificial Intelligence Readiness Index lists the following as the top five African countries making strides in AI regulation: Mauritius, South Africa, Seychelles, Kenya and Rwanda. Tunisia and Egypt have also been lauded for their progress. Mauritius is credited with being the first African country to have a fully formalized national AI strategy, comprising the Mauritius Artificial Intelligence Strategy, the Digital Government Transformation Strategy 2018–2022 and the Digital Mauritius 2030 Strategic Plan. The government of Mauritius has also announced that it will establish a Mauritius Artificial Intelligence Council (MAIC). Nigeria is also making headway, having launched its publicly run Centre for Artificial Intelligence and Robotics in Nigeria (CFAIR) in November 2020. Other countries are establishing task forces with the sole purpose of developing national AI strategies; in February 2018, for instance, Kenya instituted the Distributed Ledgers Technology and Artificial Intelligence Task Force. This blog post posits that in the absence of adequate AI regulatory responses, it is difficult to mitigate the gendered impacts of AI assistant technologies.

The Role of Data Protection Laws

Data protection law is a possible remedy to the stereotypic harms of discrimination raised by the design and development of AI assistant technologies. The African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention) provides a normative framework for data protection in Africa that is consistent with the continent’s legal, cultural, economic and social environment. Notably, data protection law could help mitigate the risks of algorithmic discrimination in AI assistant technologies. More specifically, this blog post contends that Data Protection Impact Assessments play a crucial role in mitigating the high risk of societal harm posed by gendered AI assistant technologies. That harm could be mitigated in the following ways:

  1. The policy positions of African countries, as set out in their policy documents on AI and related technologies, which specify in particular the need for such technologies to meet legal and ethical standards, should be revised to consider not just how AI technologies may produce or reproduce social biases, but whether they encompass social biases within their very design.

  2. In line with Data Protection Impact Assessment requirements, tech companies in Africa and those from the West, such as Amazon, Apple and Microsoft, should review the female default voice of their AI assistant technologies and the marketing of these products, and, critically, address the responses of their AI assistant technologies where these are shown to portray stereotyped and heteronormative female characterizations.

  3. Tech companies such as Apple and Microsoft could contribute toward making the labour involved in the production of their AI assistant technology more visible by, for example, giving more credit to the female actors who play Cortana (Jen Taylor) and Siri (Susan Bennett).

  4. Tech companies should consider adding a gender-neutral voice option, such as Q, a genderless voice technology currently on the market.

  5. Service providers should obtain an ample variety of data, whether gendered or gender-neutral, for training the models embedded in their devices, in order to decrease the bias of these services; a sketch of such a pre-training audit follows this list.
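
A minimal sketch of such an audit, assuming each voice sample carries an annotated speaker label; the label values and the 25% representation floor are illustrative choices, not requirements drawn from any data protection law.

```python
from collections import Counter

def audit_voice_dataset(labels: list[str], floor: float = 0.25) -> dict:
    """Report the share of each speaker category and flag any category
    falling below the chosen representation floor."""
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {category: n / total for category, n in counts.items()}
    flagged = [c for c, share in shares.items() if share < floor]
    return {"shares": shares, "under_represented": flagged}

# Illustrative labels; a real dataset would carry annotated speaker metadata.
samples = ["female", "male", "female", "nonbinary", "female", "male"]
report = audit_voice_dataset(samples)
print(report["shares"])             # per-category shares of the dataset
print(report["under_represented"])  # ['nonbinary'] with the 25% floor
```

Running a check like this before training makes skew in the voice corpus visible early, when it is still cheap to collect more varied samples.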

Conclusion

African countries can borrow best practices from the European Commission, which offers guidance on adopting an Ethics by Design approach when designing, developing, deploying and/or using AI assistant technologies. As consumer adoption and use of AI assistant technologies increase, it is time to scrutinize and improve their portrayals of gender.

