The Ethics and Societal Impact Gap Between Technology and Policy

Introduction

In the digital age, technological innovation continues to advance at an unprecedented rate. However, the ability of policy frameworks to effectively guide and regulate these innovations remains under strain. While several jurisdictions, such as the European Union and the United States, have developed laws and ethical guidelines, the challenge is less about a complete lack of regulation and more about whether existing and evolving policies can keep pace with the socio-technical dynamics artificial intelligence creates.

In Africa, the conversation is no less urgent. Multiple countries have adopted digital strategies, data protection laws, and ethical guidelines on AI. Yet the practical alignment between stated principles and the social, economic, and cultural consequences of deploying such technologies remains limited. This disconnect constitutes what we define as the ethics and societal impact gap: the space between ethical commitments on paper and their translation into lived, equitable outcomes on the ground.

The Pace Mismatch: Technology Outruns Law

Technology changes rapidly. AI systems, for example, can be developed and deployed in weeks. In contrast, lawmaking is slow: legislatures must research, debate, and consult. As a result, regulation tends to react to harms rather than prevent them. A 2021 report by the Centre for Data Ethics and Innovation warned that public sector AI systems are advancing without adequate ethical oversight1. This problem is clear in the use of facial recognition technology. Although critics highlight risks such as racial bias and loss of privacy, many jurisdictions still lack laws to regulate its use properly2. The EU’s Artificial Intelligence Act entered into force in August 2024, but most of its provisions will not apply until August 2026. This example illustrates the global trend: policy is evolving, but slowly, while AI tools are already shaping real-world decisions.

In Africa, a number of states, among them South Africa, Kenya, Ghana, Nigeria, and Rwanda, have published national AI strategies or are in the process of doing so. The African Union has also developed a draft continental AI strategy. These policy efforts reflect a growing commitment to ethical governance, but gaps remain in implementation, resourcing, and public engagement.

Design Without Ethics

Tech companies often focus on efficiency and profit, treating ethical questions as secondary. According to a 2023 OECD paper, many developers fail to anticipate how their products affect different social groups3. The document found that algorithms used in healthcare, housing, and employment regularly disadvantage minorities and low-income populations. Hiring algorithms, for instance, can show bias: studies have found that automated resume filters rejected more applications from women and ethnic minorities, even when qualifications were equal4. The systems reflect the biases in the data used to train them. Developers therefore have an ethical obligation to interrogate the context in which their systems operate, ensuring that design choices do not reproduce historical inequities or marginalize vulnerable populations.
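
To make this obligation concrete, the following is a minimal sketch, assuming a simple pass/fail screening pipeline; it is not drawn from the studies cited above, and all groups and outcomes are invented. It applies the “four-fifths” convention used in US adverse-impact analysis to compare selection rates across applicant groups.

```python
# Minimal sketch (not from the cited studies): checking a hypothetical
# resume-screening system for disparate impact. All data are invented.

def selection_rate(decisions):
    """Fraction of applicants the screening system passed through."""
    return sum(decisions) / len(decisions)

# Hypothetical outcomes (1 = advanced to interview) per applicant group.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # e.g., majority-group applicants
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # e.g., minority-group applicants
}

rates = {group: selection_rate(d) for group, d in outcomes.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = rate / reference
    # The 0.8 threshold mirrors the US EEOC "four-fifths rule" for adverse
    # impact; it is one auditing convention, not a universal legal standard.
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```

Even a crude check like this, run before deployment, surfaces the kind of disparity the cited studies describe; the harder design question is which groups, thresholds, and remedies are appropriate in a given context.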

Wider Societal Harms

The widening gap between technological development and policy oversight presents a series of serious and interconnected risks. One of the most pressing concerns is the potential for discrimination, particularly through the use of predictive policing technologies, including deployments reported in South Africa. These tools often rely on historical data that already reflect existing social biases, leading them to disproportionately target communities that have long been over-policed. Instead of enhancing justice, such systems risk reinforcing entrenched inequalities and systemic unfairness5.

Equally troubling is the issue of surveillance and the misuse of personal data. Major technology companies routinely collect vast amounts of information about individuals, ranging from browsing habits to location data, often without users’ full understanding or consent. Despite the sensitivity of this information, many legal frameworks remain outdated or inadequate, leaving people with little real control over their digital identities or how their data is used6.

Democracy itself is also at risk. Social media platforms, powered by opaque algorithms, can amplify disinformation more rapidly than verified facts. During Kenya’s 2022 general elections, coordinated campaigns using bots and targeted ads spread divisive and misleading content across platforms like Facebook and Twitter. These tactics were used to manipulate public opinion, undermine trust in electoral institutions, and deepen ethnic and political polarization7. Civil society watchdogs such as Mozilla and Odipo Dev found that many of these activities were orchestrated by both local actors and foreign consultancies, highlighting the transnational nature of digital disinformation and the lack of effective legal recourse.

This confluence of issues contributes to a broader erosion of public trust. There is growing concern that no one, neither governments nor companies, is truly accountable for the harms caused by digital technologies. A 2024 Pew Research study underscored this distrust, revealing that while 62% of respondents believed tech companies wield too much power and influence, only 17% expressed confidence in the government’s ability to regulate them effectively8. This skepticism is mirrored across Africa, where national regulators are often under-resourced and politically constrained. In Nigeria, for example, despite the existence of the Nigeria Data Protection Commission (formerly the Nigeria Data Protection Bureau), enforcement against digital rights violations remains inconsistent, further eroding citizen trust in both industry and state oversight. This crisis of confidence highlights the urgent need for more transparent, participatory, and rights-based approaches to AI governance.

Development of AI Models and Human Rights

While ethical AI design emphasizes principles like fairness, accountability, and transparency, these values are not just abstract ideals; they are deeply rooted in international human rights law. Ethics, when not grounded in enforceable legal obligations, risks becoming a checkbox exercise. Human rights frameworks, by contrast, provide binding standards that demand accountability, redress, and inclusion. As AI becomes increasingly embedded in decisions that affect livelihoods, mobility, and democratic participation, aligning technology development with human rights instruments such as the International Covenant on Civil and Political Rights (ICCPR) and the African Charter on Human and Peoples’ Rights (ACHPR) becomes essential. When ethics are not translated into law and practice, technology can reinforce existing forms of injustice.

The rapid development and deployment of AI models have profound implications for human rights. These technologies are increasingly embedded in decision-making processes in areas such as law enforcement, border control, welfare allocation, and employment, often without adequate oversight or transparency.

AI systems can infringe on the right to privacy, particularly when used for mass surveillance or data collection without consent. For example, facial recognition technology has been deployed in ways that enable real-time tracking of individuals in public spaces, creating chilling effects on free expression and assembly9.

For instance, Ugandan authorities used Huawei-supplied, AI-powered facial recognition surveillance systems during the 2021 elections. Reports indicated that the technology was used to identify and track political opponents and protestors, particularly supporters of opposition leader Bobi Wine10. This violated Article 17 of the International Covenant on Civil and Political Rights (ICCPR), which protects individuals from arbitrary or unlawful interference with their privacy. The resulting international scrutiny led to public outcry and reporting by investigative journalists, yet Uganda has not enacted comprehensive legislation to regulate such surveillance technologies. In another example, during the COVID-19 pandemic in 2020, Uganda’s government partnered with telecom firms such as MTN and Airtel to track users’ locations using AI-powered data analytics, with minimal transparency and questionable consent. This violated users’ right to privacy and the right to informed consent in the processing of personal data, a standard emphasized in the African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention). Public backlash and media coverage forced some transparency around data-sharing practices and prompted debate about Uganda’s data protection framework, which remains underdeveloped.

Discriminatory outcomes are another major concern. Because many AI systems are trained on biased historical data, they can perpetuate or amplify inequalities against already vulnerable groups, including ethnic minorities, migrants, and women. These risks are particularly severe when decisions are automated and not easily contestable11.

Lack of explainability and accountability in complex AI systems also undermines the right to due process. Individuals impacted by automated decisions may not understand how or why a decision was made or know how to challenge it. AlgorithmWatch stresses that the current EU AI regulatory framework does not adequately protect human rights, as it lacks enforceable safeguards for transparency, contestability, and public oversight12.

To protect human rights in the age of AI, the United Nations Office of the High Commissioner for Human Rights and civil society advocates call for legal frameworks that mandate:

  1. Human rights impact assessments (HRIAs) before deployment.

  2. Clear rules on data governance and privacy.

  3. Public transparency obligations for AI systems used in sensitive contexts.

  4. Effective avenues for contesting and appealing algorithmic decisions (a minimal logging sketch illustrating this point follows below).
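
None of the instruments cited above prescribes a technical design, but the fourth demand presupposes that automated decisions leave a trace an affected person can point to. The following is an entirely hypothetical sketch: the field names, model identifier, and example decision are invented for illustration, not drawn from any cited framework.

```python
# Illustrative sketch only: a minimal audit record for automated decisions,
# of the kind contestability obligations presuppose. All names are invented.

import io
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One appealable automated decision, captured when it is made."""
    subject_id: str       # pseudonymous ID of the affected person
    model_version: str    # the exact model build that produced the outcome
    inputs: dict          # the features the model actually saw
    outcome: str          # the decision communicated to the person
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, sink) -> None:
    """Append one decision as a JSON line to an append-only sink."""
    sink.write(json.dumps(asdict(record)) + "\n")

# Example: a hypothetical benefits-eligibility decision a claimant could
# later contest by citing its decision_id.
sink = io.StringIO()  # stands in for an append-only, access-controlled store
log_decision(DecisionRecord(
    subject_id="applicant-0042",
    model_version="eligibility-model:2.3.1",
    inputs={"household_size": 4, "declared_income": 1200},
    outcome="denied",
), sink)
print(sink.getvalue())
```

An append-only store of such records gives regulators and affected individuals the minimum factual basis for the appeal routes the list above describes: what was decided, by which system version, and on what inputs.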

Key Human Rights Challenges Posed by the Development of AI Models

The key human rights challenges of AI cover a wide range of concerns and highlight the complex relationship between AI and human rights:

i). Erosion of the right to privacy

The right to a private life is threatened by the constant tracking and surveillance that AI systems use for data collection. The lack of transparency about how AI systems operate creates uncertainty for individuals, whose data can reveal not only their interests but also their vulnerabilities. Consequently, an imbalance of power emerges. Companies possess extensive knowledge about users, while users remain uncertain about how their data is used and whose interests it serves13.

ii). Exacerbation of existing discrimination and social inequalities

The right not to be discriminated against is at increased risk because AI systems tend to exacerbate existing social inequalities by targeting already vulnerable groups. Technologies such as facial recognition and language modeling have shown prejudice against racial and ethnic minorities, leading to injustices such as false arrests and accusations. Additionally, the opacity of AI systems, and the interests they serve, can perpetuate manipulative practices. Finally, AI systems further marginalize certain groups, such as people who are not digitally literate or people with disabilities14.

iii). Challenges to freedom of expression and information

Freedom of expression and information also face challenges. AI-driven moderation on platforms can inadvertently suppress legitimate forms of expression beyond legal requirements on hate speech and other unlawful speech. Indeed, AI systems struggle to understand context and nuance within speech, and the use of bots introduces new possibilities for abuse. Moreover, systems whose algorithms are addictive by design or that create echo chambers, such as some social media platforms, can undermine the ability to make choices and decisions freely, without coercion or manipulation. Ultimately, this affects democratic participation and the free flow of information15.

iv). Threats to transparency, accountability and effective remedy

When private sector technology is used, including in the public sphere, questions of accountability and transparency may arise due to the secretive nature of these algorithms. Members of the public often lack insight into the decision-making processes and therefore struggle to challenge outcomes. This hinders their right to be heard and the right to an effective remedy16.

v). Structural harms affecting human dignity

Moreover, in addition to the specific human rights challenges mentioned above, AI has a profound effect on human dignity in general. Structural harm from surveillance technologies erodes human autonomy, agency, self-governance, and self-determination. At the same time, emotion recognition technologies risk dehumanizing individuals by reducing them to data points detached from their inherent worth and dignity17.

vi). Collective and societal level harms

The invisible and unpredictable nature of human rights harms from AI systems poses complex challenges. While these harms may not always be observable individually, their cumulative effect can have a significant impact on societies as a whole. It is difficult for individuals to perceive the existence of biased systems, as their impact can only be seen when looking at statistical aggregates and distributions. This highlights the need for human rights considerations throughout the entire lifecycle of AI systems, from design to deployment18.
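
This aggregate-only visibility can be illustrated with a toy simulation, invented for this purpose rather than taken from the cited report: each individual decision looks unremarkable on its own, yet group-level approval rates expose a systematic skew.

```python
# Toy simulation (invented, not from the cited sources): individual outcomes
# look unremarkable one by one, but aggregation exposes a group-level skew.

import random

random.seed(0)

def biased_decision(group: str) -> bool:
    """A hypothetical screening system with a small, hidden per-group skew."""
    approval_prob = 0.70 if group == "group_a" else 0.55
    return random.random() < approval_prob

applicants = ["group_a" if i % 2 == 0 else "group_b" for i in range(10_000)]
totals, approvals = {}, {}
for group in applicants:
    totals[group] = totals.get(group, 0) + 1
    if biased_decision(group):
        approvals[group] = approvals.get(group, 0) + 1

# No single applicant can observe the roughly 15-point gap; the aggregate can.
for group in sorted(totals):
    rate = approvals[group] / totals[group]
    print(f"{group}: {rate:.1%} approved over {totals[group]} decisions")
```

Any one rejected applicant in either group has no way to tell whether their outcome was skewed; only monitoring across thousands of decisions makes the disparity legible, which is why lifecycle-wide oversight matters.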

Conclusion

As artificial intelligence becomes increasingly embedded in critical processes ranging from law enforcement and employment to healthcare and digital identity, a wide gap is growing between technological development and rights-based governance. In the face of such challenges, many jurisdictions, including some African countries, have attempted to address the issue through data protection legislation and national strategies for AI. However, such frameworks often lack the speed, enforcement capacity, and inclusivity to address AI’s actual impacts. The resulting human rights violations remain very serious: privacy violations through mass surveillance, algorithmic discrimination against marginalized groups, the undermining of transparency and due process, and structural harms to human dignity and democratic participation. The ethical commitments laid down in policies fail to be put into practice, leaving individuals and communities at the mercy of technologies that are opaque and unaccountable.

Closing this ethics and societal impact gap requires coordinating all stakeholders through appropriate structures. Laws must be adaptable to rapid technological change and genuinely enforceable, with provisions for independent oversight and public transparency. Developers, technology firms, and their senior and strategic leaders must embed ethics and human rights throughout the AI lifecycle, not as a band-aid for compliance but as a primary responsibility in itself. Civil society, academia, and affected communities should play a genuine role in participatory and inclusive processes for AI governance. For Africa in particular, the task is to ensure that AI systems are neither imported with foreign biases nor imposed without local context, but are instead built and governed to empower communities and uphold equity and basic rights. Africa can thereby help lead a global transition toward AI that is innovative, but at the same time just, responsible, and human-centered.

1 Centre for Data Ethics and Innovation, ‘The roadmap to an effective AI assurance ecosystem’ (Independent report, 8 December 2021) https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem accessed 3 July 2025

2 House of Lords Liaison Committee, AI in the UK: No Room for Complacency (7th Report, Session 2019–21, HL Paper 196, 18 December 2020) https://publications.parliament.uk/pa/ld5801/ldselect/ldliaison/196/196.pdf accessed 3 July 2025

3 Organisation for Economic Co-operation and Development, Advancing Accountability in AI: Governing and Managing Risks Throughout the Lifecycle for Trustworthy AI (OECD Digital Economy Papers No 349, February 2023) https://www.oecd.org/en/publications/advancing-accountability-in-ai_2448f04b-en.html accessed 3 July 2025

4 Equality and Human Rights Commission, ‘Artificial intelligence: checklist for public bodies in England’ (Guidance, 1 September 2022) https://www.equalityhumanrights.com/guidance/artificial-intelligence-checklist-public-bodies-england accessed 3 July 2025

5 Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (Picador 2018)

6 Information Commissioner’s Office, ‘Guidance on AI and Data Protection’ (updated 15 March 2023) https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ accessed 3 July 2025

7 African Digital Democracy Observatory, ‘Early Detection and Countering Hate Speech During the 2022 Kenyan Elections’ (Disinfo.Africa, 24 August 2022) https://disinfo.africa/early-detection-and-countering-hate-speech-during-the-2022-kenyan-elections-e0f183b7bdd1 accessed 16 July 2025

8 Pew Research Center, ‘Americans’ Views of Technology Companies’ (29 April 2024) https://www.pewresearch.org/internet/2024/04/29/americans-views-of-technology-companies-2/ accessed 3 July 2025

9 UN Office of the High Commissioner for Human Rights, The Right to Privacy in the Digital Age: Report on Artificial Intelligence and Human Rights (A/HRC/48/31, 15 September 2021) https://www.ohchr.org/en/documents/thematic-reports/ahrc4831-right-privacy-digital-age-report-united-nations-high accessed 3 July 2025

10 Joe Parkinson, Nicholas Bariyo and Josh Chin, ‘Huawei Technicians Helped African Governments Spy on Political Opponents’ The Wall Street Journal (Kampala, 15 August 2019) https://www.wsj.com/articles/huawei-technicians-helped-african-governments-spy-on-political-opponents-11565793017 accessed 16 July 2025

11 AlgorithmWatch, EU’s AI Act Fails to Set Gold Standard for Human Rights (3 April 2024) https://algorithmwatch.org/en/ai-act-fails-to-set-gold-standard-for-human-rights/ accessed 3 July 2025

12 AlgorithmWatch, Upcoming Commission Guidelines on the AI Act Implementation: Human Rights and Justice Must Be at Their Heart (16 January 2025) https://algorithmwatch.org/en/statement-commission-guidelines-ai-act/ accessed 3 July 2025

13 European Network of National Human Rights Institutions (ENNHRI), Key Human Rights Challenges in the Development of Artificial Intelligence (ENNHRI, 2021) https://ennhri.org/ai-resource/key-human-rights-challenges/ accessed 3 July 2025

14 ibid

15 ibid

16 ibid

17 ibid

18 ibid
