The Gender Equality Mirage: From Human Bias to AI Bias in Digital ID Systems in Africa

Introduction

Digital ID systems use digital technologies to capture, validate, store and transfer data about people, and to identify and authenticate them. As the digital landscape has transformed, African countries such as Kenya, Nigeria, South Africa and Egypt have adopted digital ID systems. Approximately 500 million people in Africa live without any form of legal identification, such as a birth certificate or national ID. Digital ID systems have therefore grown increasingly popular because of their relative ease, convenience and low cost compared to analogue ID systems. More fundamentally, a digital ID system may employ AI within its own functions or be connected to AI systems that leverage its data. In 2018, Ghana launched BACE API, an application programming interface that uses AI-enhanced facial recognition technology to help identify individuals in digital ID systems. The technology was intended to improve the design of facial recognition software in Africa, where such software has become an important tool in digital ID systems.

Gender inequality remains a reality, as women continue to face hurdles such as discrimination and unfair treatment when trying to obtain a digital ID. In Kenya, colonial-era law barred women from obtaining a physical ID, a restriction that persisted until 1979. ID registries, moreover, were set up to compel entry into the labour market and to control the movement of African men. This legacy carried over after independence: one's ethnicity, clan and patrilineal lineage are still recorded when an official ID is issued, entrenching patriarchy and the unequal treatment of women. In Nigeria, 8 million more men than women have obtained a digital ID under the National Identification Number system. In Ghana, rural women have faced their own share of discrimination while registering for a digital ID: their widespread lack of official identification is largely due to difficulties such as fingerprints worn unreadable by manual labour, or cataracts that prevent iris scans for older women. To tackle this problem, BACE API serves as a one-stop shop for remote identification. It eases registration, allowing users to access government services such as food aid and cash transfers once their identity has been verified through the technology.

In addition to the deep-rooted gender biases propagated by human beings, AI is bolstering negative stereotypes of women in Africa. Flawed algorithms and data sets have compounded existing structural inequalities for women. Biased data that excludes women from the identification, verification and authentication processes of digital ID systems causes them to miss out on essential government services, such as healthcare, for lack of a digital ID. Bias affects the whole image-processing chain, from phone camera sensors, to face position detection and alignment software, to the recognition algorithms in digital ID systems. For instance, most facial recognition tools in South Africa's digital ID systems are trained on data sets dominated by white faces, which leads to higher rates of misidentification of black faces and thus to racial bias.
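
To make this concrete, the short sketch below shows one way such bias can be surfaced in practice: it computes a recognition system's false match rate separately for each demographic group. The log format, group labels and numbers are illustrative assumptions, not data from any real deployment.

```python
from collections import defaultdict

# Hypothetical verification log: (group, predicted_match, true_match).
# A real audit would use a system's actual evaluation data.
logs = [
    ("group_a", True, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", False, False),
    ("group_b", True, False), ("group_a", False, False),
]

errors = defaultdict(lambda: [0, 0])  # group -> [false matches, impostor trials]
for group, predicted, actual in logs:
    if not actual:           # impostor attempt: the pair is not the same person
        errors[group][1] += 1
        if predicted:        # the system wrongly accepted the impostor
            errors[group][0] += 1

for group, (fm, trials) in sorted(errors.items()):
    print(f"{group}: false match rate = {fm / trials:.2f} over {trials} impostor trials")
```

A gap between the per-group rates is exactly the kind of disparity that a data set dominated by one demographic tends to produce.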

Algorithms and automated decision-making systems are often considered objective and unbiased. Yet big data and machine learning are major perpetrators of gender bias in digital ID systems. Identification during registration in a digital ID system is largely a database-centered function, but algorithmic decision-making will likely be used to process the data contained within those systems during verification. Further along the machine learning chain, such learning might be used to process large data sets of unlabeled data points to establish, or at least approximate, user identity. In light of these concerns, GovChat, a platform built by a private-sector company in South Africa, was launched largely as a communications channel connecting the government and citizens. It aims to ensure that everyone receives government services quickly and without discrimination. Enhanced by natural-language-processing AI, one iteration of the product collects identity information to help process social distress-relief grants. Notably, it collects national identity numbers rather than biometric data.
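
To illustrate the kind of algorithmic verification the paragraph describes, the sketch below compares feature embeddings with cosine similarity against a fixed threshold. This is a generic, assumed design, not GovChat's or any national system's actual method; the embedding size and threshold are arbitrary.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.8) -> bool:
    """Accept the probe as the enrolled identity if similarity clears the threshold.
    The threshold is a policy choice: if it is tuned for one group's typical
    capture quality, it can silently raise another group's rejection rate."""
    return cosine_similarity(enrolled, probe) >= threshold

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                     # embedding captured at registration
probe = enrolled + rng.normal(scale=0.1, size=128)  # fresh capture of the same person
print(verify(enrolled, probe))                      # True
```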

Biased Algorithms

Bias refers to any form of preference. An algorithm is a formalized, abstract description of a computational procedure. Algorithmic bias connotes repeatable errors in a computer system that lead to unfair outputs favoring one group over another. In South Africa, for example, black South African women have been excluded from loan eligibility due to historically skewed data sets. In a market where more men than women own smartphones, a digital credit application that trains its algorithms on customer data will rely more on men's data than women's. As a result, men are more likely to receive high credit scores while women receive low ones. Moreover, problems such as mass surveillance and racial profiling are on the rise in South Africa because of biased algorithms in its digital ID systems. Algorithms can also be biased through labeling: recording loan applicants' occupations as "doctor" versus "nurse" rather than as "healthcare worker." "Doctor" is associated with men, while "nurse" is linked to women. Using the term "healthcare worker" masks gender and levels the playing field in loan applications.
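
The occupation example can be expressed directly in code: collapsing gender-correlated job titles into a broader category removes a proxy for gender before the record reaches a scoring model. The mapping below is a hypothetical sketch, not a production taxonomy.

```python
# Hypothetical mapping: collapse gender-correlated job titles into one
# category before they are used as features in a credit-scoring model.
OCCUPATION_GROUPS = {
    "doctor": "healthcare worker",
    "nurse": "healthcare worker",
    "midwife": "healthcare worker",
}

def mask_proxy_features(applicant: dict) -> dict:
    """Return a copy of the applicant record with the occupation generalized,
    so the job title can no longer serve as a stand-in for gender."""
    masked = dict(applicant)
    occupation = masked.get("occupation", "").lower()
    masked["occupation"] = OCCUPATION_GROUPS.get(occupation, occupation)
    return masked

print(mask_proxy_features({"occupation": "Nurse", "income": 42000}))
# {'occupation': 'healthcare worker', 'income': 42000}
```

Proxy masking is only a first step; a proxy can survive through correlated features, which is why the audits described above remain necessary.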

Biometrics Bias

Biometrics denotes the automated recognition of human beings by their biological and behavioral traits, and includes fingerprint, face, iris, voice and DNA-based technologies. Many governments use biometrics widely in their national digital ID systems. To ameliorate the risk of biometric bias, Egypt's national digital ID system uses a cutting-edge technology that creates a unique 'fingerprint' from the user's vein patterns. Vein biometrics has numerous benefits over fingerprint scanning: it is more accurate and less susceptible both to forgery and to errors caused by finger cuts, dirt or moisture.
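
One practical upshot is the ability to fall back to a more robust modality when a fingerprint capture is of poor quality, as with prints worn down by manual labour. The enrollment flow below is a hypothetical sketch of that idea, not Egypt's actual implementation; the quality scores and threshold are assumed.

```python
from typing import Callable

def enroll(capture_fingerprint: Callable[[], float],
           capture_vein: Callable[[], float],
           quality_floor: float = 0.6) -> str:
    """Try fingerprint first; fall back to vein capture when quality is too low.
    Quality scores in [0, 1] are illustrative; real systems use standards
    such as NFIQ for fingerprint image quality."""
    if capture_fingerprint() >= quality_floor:
        return "enrolled: fingerprint"
    if capture_vein() >= quality_floor:
        return "enrolled: vein pattern"
    return "refer to assisted enrollment"  # no one should be excluded outright

# Simulated captures: a worn fingerprint, but readable veins.
print(enroll(lambda: 0.35, lambda: 0.92))  # enrolled: vein pattern
```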

Remedies to AI Bias

The scope of AI ethics spans immediate concerns such as gender bias. There is, however, no silver bullet in the far-reaching attempt to remedy the dangers of discrimination and unfairness in digital ID systems. Achieving fairness and mitigating bias in how algorithmic models are architected and used is a complex, intricate problem. Transparency and responsible data acquisition, handling and management are necessary components of algorithmic fairness. Accordingly, this blog recommends the following measures to protect women from discrimination in digital ID systems:

  1. Transparency by design

Digital ID systems in Africa are normally designed without public participation. Countries like Kenya and Ghana attribute this to the connection between digital ID and national security, an area typically handled in high secrecy. However, the functions of ID also extend to the provision of social protection and economic services. The lack of transparency and openness also makes it difficult to develop further technology on top of digital ID systems in Africa. Best practices should be borrowed from the United Kingdom, which has taken the route of seeking wide input on the rationale, use cases and safeguards for digital ID to promote gender equality.

  2. Identity-first model

Aadhaar, India's digital ID system, has made inclusion its utmost priority. It adopts the "identity-first" model, as opposed to the "nationality-first" model, which reduces barriers to providing basic identification for the populations that both lack it and need it most. An additional benefit of the "identity-first" model is that its minimal data collection accelerates enrollment and reduces costs: Aadhaar now costs as little as 1.16 USD for a digital-only ID, making it accessible to even the poorest users. A similar approach to registering new cardholders would bear fruit in the African region, given the comparable financial and logistical barriers to obtaining a legal identity that poor and marginalized communities face.
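
As a rough illustration of what identity-first data minimization might look like, the sketch below defines a minimal enrollment record that deliberately omits nationality, ethnicity and lineage fields. The fields are assumptions for illustration and do not reflect Aadhaar's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IdentityFirstRecord:
    """Identity-first enrollment: only what is needed to establish uniqueness.
    Nationality, ethnicity and patrilineal lineage are deliberately absent,
    unlike the colonial-era registries described above."""
    id_number: str
    full_name: str
    date_of_birth: str
    biometric_reference: bytes    # stored template, not raw images
    phone: Optional[str] = None   # optional contact for service delivery

record = IdentityFirstRecord("0000-0000-0000", "Jane Doe", "1980-01-01", b"template")
print(record.id_number)
```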

Legislative Response

Kenya enacted the Data Protection Act, 2019, which contains progressive provisions on automated decision-making and on data protection by design or default. These provisions recognize transparency by design as one way of countering the opaque nature of biased algorithms in digital ID systems. South Africa, in addition, enacted the Protection of Personal Information Act, which regulates the automated processing of data. African countries are thus making strides in tackling AI bias in digital ID systems.

Conclusion

From the foregoing, the most common bias in data is gender bias. Many hail data as the great equalizer, but the advent of algorithmic gender bias in digital ID systems pushes back against that claim. All hope is not lost if proper legislation and policies governing digital ID systems in Africa are adopted and enforced. Algorithmic justice can only be achieved through inclusive design of digital ID systems, grounded in participatory design processes, research and practice.
