Global Developments in AI Regulation and Possible Impact on AI Regulation in Africa

Artificial Intelligence (AI) has introduced a new era of technological progress, and the rise of generative AI has shifted public debate towards how far AI should be allowed to go. As global discussions on AI ethics and governance intensify, it is crucial to examine the landscape of AI regulation and its potential impact on Africa. As debates on AI regulation continue to grow, the main areas of contention are the balance between innovation and responsibility, governance and accountability, and the ethical implications of deploying AI.1 In this dynamic landscape, we must consider where Africa stands and how the evolving regulatory frameworks might influence the continent.

AI technologies have redefined industries and prompted collective introspection on the ethical implications of unleashing intelligent systems into society. From Europe’s pioneering efforts to the varied approaches in Asia and the ongoing dialogues in the United States, the global community is shaping the rules that will govern the AI-driven future.2 This blog post highlights current international developments in AI regulation, exploring the nuanced approaches nations are taking to balance AI’s benefits with the imperative to safeguard fundamental human values. With a keen eye on Africa, we consider how the continent can harness these global insights to carve its own path in AI governance.

Global Developments in AI Regulation – Focus on the EU and US

Europe’s Regulatory Landscape – EU AI Act

European nations have been at the forefront of AI regulation, aiming to strike a balance between fostering innovation and protecting citizens’ rights.3 On March 13, 2024, Members of the European Parliament voted to pass the landmark European Union Artificial Intelligence Act (EU AI Act).4

The EU AI Act, the first comprehensive legal framework on AI worldwide, aims to shape Europe’s digital future by fostering trustworthy AI within and beyond Europe. The Act addresses risks specifically created by AI applications; prohibits AI practices that pose unacceptable risks; determines a list of high-risk applications and sets clear requirements for AI systems in those applications; defines specific obligations for deployers and providers of high-risk AI applications; requires a conformity assessment before a given AI system is put into service or placed on the market; and puts enforcement in place after a given AI system has been deployed. Overall, the Act establishes a governance structure at the European and national levels.5 The EU AI Act takes a risk-based approach, categorizing AI systems into four levels of risk: banned (unacceptable risk), high risk, limited risk, and minimal or no risk.

  • Banned: AI systems that threaten people’s safety, livelihoods, and rights are prohibited. This includes practices such as government social scoring or voice-assisted toys that encourage dangerous behaviour.

  • High Risk: High-risk AI systems are subject to strict obligations before being placed on the market. These obligations include adequate risk assessment and mitigation systems, high-quality datasets, activity logging for traceability, detailed documentation, transparent information to deployers, human oversight measures, and robustness, security, and accuracy requirements. Remote biometric identification systems are considered high-risk, with strict requirements in place.

  • Limited Risk: Limited risk refers to risks arising from a lack of transparency in AI usage. The AI Act introduces transparency obligations to ensure that humans are informed when necessary, fostering trust. For example, people should be told when they are interacting with an AI system such as a chatbot. Providers must also ensure that AI-generated content is identifiable, especially in matters of public interest.

  • Minimal or No Risk: Minimal-risk AI applications, such as AI-enabled video games or spam filters, may be used freely. Most AI systems currently used in the EU fall into this category, posing minimal or non-existent risks.

Although the regulation has been met with positive commentary throughout the global AI landscape, some experts have criticised it. These criticisms stem from concerns about the lack of requirements for AI users to justify their decisions or provide rights of objection, the reliability of companies’ self-assessment for compliance, the burden on non-EU companies to appoint representatives, the absence of mandates for explainability and quality assurance in AI systems, legal uncertainties in high-risk areas, and the focus on market-placement definitions, which creates ambiguity in liability and accountability.6 These concerns call into question the effectiveness and practical implementation of the EU AI Act in regulating AI technologies.

Civil society organisations such as ARTICLE 19 have also noted potential negative impacts of the EU AI Act, questioning its effectiveness in safeguarding human rights. These include the Act’s failure to establish comprehensive accessibility requirements for low- and medium-risk AI systems, potentially overlooking the needs of individuals with disabilities. Loopholes in transparency obligations allow the private sector and security agencies to evade certain requirements, undermining public scrutiny. The Act’s self-regulatory risk-classification framework, influenced by industry lobbying, raises doubts about its ability to assess and address high-risk AI systems accurately.7

Limited transparency for law enforcement and migration authorities poses accountability challenges in critical areas susceptible to rights violations. Inadequate fundamental rights impact assessments and the lack of mandatory stakeholder engagement further weaken the Act’s ability to prevent human rights violations. Additionally, the Act’s double standards for human rights outside the EU create risks of rights violations in non-EU countries. These identified shortcomings collectively highlight significant deficiencies in the EU AI Act’s capacity to uphold fundamental human rights in developing and deploying artificial intelligence technologies.8

United States AI Regulatory Approach

The regulatory framework for AI in the United States is still evolving, but the country has been actively taking steps to regulate AI across various sectors. Efforts have been made at both the federal and state levels to address the challenges and opportunities of AI technology. The U.S. government has enacted several AI-related laws at the federal level over the past few Congresses.9 These include standalone legislation and provisions within broader acts that drive AI research, development, and evaluation activities across federal science agencies. Notable among these laws is the National Artificial Intelligence Initiative Act of 2020, which established the American AI Initiative and provided guidance on AI-related activities. Specific acts like the AI in Government Act and the Advancing American AI Act have also mandated certain agencies to lead AI programs and policies within the federal government. The introduction of numerous AI-relevant bills in Congress demonstrates a growing focus on AI legislation, with some bills already enacted and others pending consideration.10

In October 2022, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights, emphasizing the importance of ethical AI development. The National Institute of Standards and Technology (NIST) also released an AI Risk Management Framework to guide risk assessment and management in AI applications, and on March 30, 2023, it launched the Trustworthy and Responsible AI Resource Center to support implementation of the framework and international alignment. Furthermore, President Biden’s executive order on the ‘Safe, Secure, and Trustworthy Development and Use of AI’, issued in October 2023, outlines comprehensive policies across eight key areas (technology, policy, managerial, procurement, regulatory, ethical, governance, and legal).11 Much like the European Union, the United States is actively shaping the regulatory landscape for AI, seeking to balance innovation and economic benefits with the need for ethical and responsible AI development and deployment.

The African AI Regulatory Landscape

Regulation of AI in Africa is consequently gaining traction, driven by the need for an African voice in conversations on AI policy and for active participation in shaping policy that best reflects the African AI landscape, taking into account AI use cases, developmental factors, and AI deployment on the continent. African AI regulation has so far been characterized by an influx of national AI strategies, with countries like Rwanda and Senegal joining Egypt and Mauritius, which have long had national AI strategies. Other African countries, such as Nigeria, Ghana, and Kenya, are making strides toward developing their own.12

Regionally, the African Union (AU) has recognized the growth of AI use on the continent and the need for regulations, aligned with the African Charter on Human and Peoples’ Rights, to mitigate the potential harms and risks of AI technologies. Policy strides include the adoption of Resolution 473, which addresses the need to study human and peoples’ rights in relation to artificial intelligence (AI), robotics, and other new and emerging technologies in Africa. The resolution builds on the 2019 Sharm El Sheikh declaration, which placed particular focus on the African Digital Transformation Strategy (DTS) 2020–2030 and called for the establishment of a working group on AI to study and develop a common African stance on AI, capacity-building frameworks, and an AI think tank to assess and recommend projects aligned with Agenda 2063 and sustainable development.13

The AU AI Continental Strategy is set to be the primary guiding instrument for regulating AI at the regional level and is likely to spur further development of national AI strategies and regulatory frameworks. The strategy seeks to guide AI-driven socio-economic development and high-quality African data gathering, processing, and interpretation.14 In anticipation of the strategy, the African Union Development Agency (AUDA-NEPAD) published the AUDA-NEPAD Continental Strategy Road Map for Africa and a white paper, Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063, on 29 February 2024.15

The roadmap provides a strategic framework for African countries to develop, adopt, and utilize AI technologies to address critical challenges, promote socio-economic development, and overcome structural barriers hindering the growth of a robust AI ecosystem. The white paper, which shares the roadmap’s overall objective, provides guidance and recommendations for African countries on how to harness the potential of AI in a responsible and sustainable manner. Specific to AI regulation, both documents emphasize the importance of creating an enabling environment for AI deployment, promoting self-regulation, fostering AI business growth and entrepreneurship, and aligning AI initiatives with the goals of the AU Agenda 2063.

Implications of Global AI Regulation for Africa

While there are benefits to leveraging global insights on AI regulation, reliance on global frameworks may limit the autonomy of African countries in shaping regulatory approaches tailored to their unique socio-economic and technological contexts. Additionally, differences in ethical perspectives and cultural norms between regions pose challenges in adopting global ethical guidelines, potentially leading to disparities in how AI policies are implemented in Africa. With policy formulation at its nascent stages, the continent is still grappling with fully understanding the extent to which AI can be utilized while contending with socio-economic constraints, digital disparities, and infrastructure gaps, even as it explores its innovation capabilities. The existence of foundational laws, such as data protection laws and subsidiary regulations, forms a basis for AI regulation. Nevertheless, in the race to regulate AI, Africa may not be entirely ready to do so. This does not negate the need for African perspectives to be considered in the global discussions that shape AI regulation and regulatory processes. International developments in AI regulation can inform African nations as they navigate the complexities of governing artificial intelligence. Drawing from these experiences should not presuppose a Brussels effect; rather, it should inform how Africa can position itself at the forefront of responsible AI development, fostering innovation that uplifts communities and respects human rights while reflecting the dynamics and nuances of the African AI ecosystem.

1 Robert Bergman, ‘The Paradox of AI Regulation: Navigating Uncharted Terrains’ (Mediate, September 2023)

2 ‘AI Act: Shaping Europe’s digital future’ (European Commission, March 2024)

3 The Ethics of Artificial Intelligence: Issues and Initiatives (European Parliamentary Research Service, 2020)

4 ibid

5 ibid

6 Marcin Szczepański, ‘United States Approach to Artificial Intelligence’ (European Parliamentary Research Service, January 2024)

7 ibid

8 ibid

9 Okolo, C.T., Aruleba, K., Obaido, G. (2023). Responsible AI in Africa: Challenges and Opportunities. In: Eke, D.O., Wakunuma, K., Akintoye, S. (eds) Responsible AI in Africa. Social and Cultural Studies of Robots and AI. Palgrave Macmillan, Cham.

10 Fanny Vainionpää, Karin Väyrynen, Arto Lanamäki, ‘A Review of Challenges and Critiques of the European Artificial Intelligence Act’ (ResearchGate, 2023)

11 ‘Artificial Intelligence is at the core of discussions in Rwanda as the AU High-Level Panel on Emerging Technologies convenes experts to draft the AU-AI Continental Strategy’ (AUDA-NEPAD, 29 March 2024)

12 ibid

13 ‘Taking A Continental Leap Towards A Technologically Empowered Africa At The AUDA-NEPAD AI Dialogue’ (AUDA-NEPAD, 8 March 2024)
14 ibid
