Responsible and ethical AI in Africa: Core issues and current regulatory landscape
Challenge description
Governance of AI is a key element of successful development in this field. Currently, African countries are at a preliminary stage of enforcing AI regulatory frameworks.
According to the NIST AI Risk Management Framework, risks arising from the use of AI systems include:
- harm to people: individual, group and societal harm, such as discrimination, violation of rights, threats to physical or mental safety, and reduced economic opportunities and access to education;
- harm to organisations: threats to business operations and reputation, and security breaches resulting in financial losses;
- harm to ecosystems: damage to resources, supply chains and the environment.
The Framework states that AI owners and actors can establish their risk tolerance through legal or regulatory mechanisms.
UNESCO has found that 18 African countries have initiated the development of guidelines for AI governance: 13 have already enacted AI strategies, 4 of which mention AI in their 4IR or emerging technologies strategies. In 5 countries AI frameworks are under development, and relevant authorities and commissions have been established in 13 countries.
No African country has specific AI legislation, except for Mauritius, which enacted partial AI legislation in 2021.
As of April 2023, Egypt and Mauritius were the only countries in Africa to have implemented a dedicated AI strategy. Ten African countries, namely Morocco, Algeria, Tunisia, Ghana, Benin, Nigeria, Ethiopia, Kenya, Rwanda and South Africa, were developing their national strategies.
Based on the OECD Database, several countries have enacted a variety of initiatives aimed at AI guidance and regulation. Egypt accounts for the largest number of related initiatives (8), followed by Tunisia (7), Kenya (6), Morocco (4), South Africa (4), Uganda (3) and Nigeria (2), the lowest number on the list. In Tunisia, Egypt and South Africa, AI-related initiatives place AI activities under regulatory oversight and ethical advisory bodies.

Ethical principles
Ethics and governance of AI are key elements of successful development in this field. According to UNESCO, AI systems are likely to embed biases and to exacerbate climate change, since training such systems requires computing power and electricity, which leads to carbon dioxide emissions. The technology can also violate human rights: AI can deepen discrimination and threaten minority groups. For instance, a 2019 study found that AI systems designed for hate speech detection were twice as likely to flag tweets by African-Americans as offensive.
Additionally, biased data is often used in algorithm development, including loan decision-making, and gives rise to racial and gender discrimination. Historical records indicate that the majority of low interest rate loans were granted to white men and women. The tendency is still ongoing, as the 2019 case of First National Bank (FNB) of South Africa shows.
In South Africa, the banking sector is widely adopting robotics and AI technology. This historical data is now being used for AI training, which perpetuates the inequalities.
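To make the mechanism concrete, here is a minimal sketch with synthetic records (the data, groups and scores are invented for illustration, not drawn from FNB or any credit bureau): a naive learner with access to group membership simply memorises the skewed approval rates in the historical labels, reproducing the bias.

```python
from collections import defaultdict

# Synthetic, hypothetical loan records: (group, credit_score, approved).
# Approvals are historically skewed toward group "A" even at similar scores.
historical = [
    ("A", 650, True), ("A", 600, True), ("A", 580, True), ("A", 620, False),
    ("B", 650, False), ("B", 600, False), ("B", 700, True), ("B", 620, False),
]

# A naive per-group base-rate "model": what a learner that sees group
# membership can effectively memorise from biased labels.
stats = defaultdict(lambda: [0, 0])  # group -> [approved_count, total]
for group, _, approved in historical:
    stats[group][0] += int(approved)
    stats[group][1] += 1

rates = {g: ok / total for g, (ok, total) in stats.items()}
for group in sorted(rates):
    print(f"group {group}: historical approval rate {rates[group]:.0%}")
```

Applicants with identical credit scores end up with very different predicted approval odds purely because of group membership, which is exactly how historical bias survives into an automated system.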
This also raises the issue of false positives and false negatives in the use of AI. In 2019, the National Institute of Standards and Technology (NIST) conducted a study of face recognition algorithms which revealed that false positive rates were highest when processing data of West African, East African and East Asian people, while data of Eastern European people produced the lowest false positive rates. Furthermore, false negatives in AI systems used by law enforcement agencies were found to be higher for African and Caribbean-born people.
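The two error rates compared across demographic groups in such studies are computed from a matcher's confusion counts. The sketch below uses hypothetical counts (not NIST's published figures) purely to show how a per-group false positive rate and false negative rate are derived:

```python
# group -> (false_positives, true_negatives, false_negatives, true_positives)
# All counts below are hypothetical, for illustration only.
counts = {
    "group_a": (120, 9880, 30, 970),
    "group_b": (15, 9985, 5, 995),
}

def error_rates(fp, tn, fn, tp):
    fpr = fp / (fp + tn)  # non-matching faces wrongly accepted as matches
    fnr = fn / (fn + tp)  # genuine matches wrongly rejected
    return fpr, fnr

for group, c in counts.items():
    fpr, fnr = error_rates(*c)
    print(f"{group}: FPR={fpr:.2%}, FNR={fnr:.2%}")
```

A high false positive rate means innocent people are wrongly matched, while a high false negative rate means genuine matches are missed; both disparities matter when such systems are used by law enforcement.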
Developing and putting ethical principles for AI in place is a necessity, as issues such as lack of transparency, data bias, surveillance, gender inequality and discrimination can be exacerbated without proper provisions for AI systems.
The ethical provisions landscape in Africa is quite nascent. Yet, a number of countries have made some progress in this field. The Republic of Congo, Sao Tome and Principe, and Zimbabwe have ethical guidelines for AI. In 2023, Egypt launched the Egyptian Charter for Responsible AI.
In 2019, the OECD adopted its Principles on AI, which aim to promote respect for human rights and democratic values. Transparency, explainability, security and accountability are among the core principles. At the moment, Egypt is the only African country committed to the OECD AI Principles.
According to the policy brief Responsible Artificial Intelligence Policies and Regulations in Africa: The Gaps and a Way Forward, there is no agreement among consumers and developers in Africa on what counts as AI or how to define responsible AI principles. Where artificial intelligence is only vaguely referenced, the principles of responsible AI risk being overlooked during the policy development phase.
AU Initiatives
In 2014, the African Union developed the Convention on Cyber Security and Personal Data Protection, also known as the Malabo Convention, which entered into force in 2023. The Convention includes provisions on AI in two of its articles: it regulates the automated processing of personal data and establishes the right of individuals not to be subject to a decision based solely on automated data processing.
In 2023, the African Union and the African Union Development Agency (AUDA-NEPAD) began drafting the African Union Artificial Intelligence (AU-AI) Continental Strategy. The stated aim is a strategy that would serve as a guideline for African countries in developing national AI policy frameworks. A public draft was announced for launch in January 2024, but it is not yet available.
Furthermore, the AU is currently working on the African Continental Free Trade Area Protocol on Digital Trade. According to Microsoft, the document regulates the use of AI and calls for the safe and responsible use of emerging technologies in general.
Data Protection
Generally, the data privacy legislation landscape in Africa is still developing. As most African countries lack specific AI legislation, with Mauritius being the only country with partial legislation, AI and other automated decision-making (ADM) systems are not properly regulated; instead, they fall under existing data protection laws.
Following the 2022 AU Data Policy Framework, 32 African countries have enacted some form of personal data protection regulation. As of 2022, 30 countries had provisions for ADM in their data protection laws, while Seychelles and Tanzania lacked such requirements. Nonetheless, the degree to which ADM is protected and regulated varies widely. The State of AI in Africa 2023 Report notes that the legislation of South Africa, Nigeria, Ghana and Kenya may be suitable for governing AI technology.
ADM must be regulated because of ethical concerns as well as cybersecurity and privacy risks. Among them are biased data and algorithms that do not account for diversity, disproportionately affecting minorities. Data breaches, privacy violations, lack of transparency and manipulation of decision-making are also common ADM risks. Because these systems use large data sets, the data can be stolen, hacked, or subjected to surveillance. Moreover, opacity in ADM algorithms can lead to inconsistent decision-making.
Nevertheless, not all countries of the continent have data protection legislation in place. Tanzania, Namibia, Eswatini, Malawi and Ethiopia have draft legislation. Legislation is absent in Libya, Sudan, Eritrea, the Central African Republic, Burundi, Guinea-Bissau, Sierra Leone and Liberia. Additionally, as AI Governance in Africa discloses, the data protection legislation of 14% of countries has no provisions on automated decision-making.
A number of data protection laws of African countries do not have requirements for Data Protection Impact Assessment (DPIA). DPIA helps to identify and reduce risks regarding personal data protection, and is essential for securing information during the use of ADM systems.
According to the findings in the Automated Decision-Making Policies in Africa policy brief, the laws requiring a DPIA for ADM are South Africa’s Protection of Personal Information Act (POPIA), in force since 2020, Nigeria’s Data Protection Act of 2023, Ghana’s Data Protection Act of 2012, and Kenya’s Data Protection Act of 2019 together with its General Regulations of 2021.
According to the policy brief, the existing data protection laws that address ADM call for transparency and accountability in ADM systems. The laws stress the rights of data subjects and require data controllers and processors to inform data owners about how their data is used in ADM.
Solutions
- Implementing data protection laws with specific provisions on ADM, such as mandatory Data Protection Impact Assessments (DPIAs) when ADM is used. A DPIA creates a layer of protection for data processed by AI and other ADM systems;
- Enacting National AI Policies for sufficient governance of emerging technologies;
- Aligning with international principles on AI ethics. In 2017, the Institute of Electrical and Electronics Engineers (IEEE) Global Initiative on Ethical Considerations in AI and Autonomous Systems (AS) was launched. For instance, it sets a standard for a Personal Data AI Agent, which implies that AI systems ought to work with a human in the loop, that is, with human engagement in the testing and training of AI algorithms, to ensure safety and respect for individual rights. Furthermore, in 2021 UNESCO published its Recommendation on the Ethics of Artificial Intelligence, a comprehensive guideline on AI ethics adopted by all 193 member states. The Recommendation reviews the consequences of implementing AI, among them gender and ethnic biases, environmental harm, and violations of human rights and freedoms. UNESCO establishes recommendations on the use of AI and sets out 10 principles of a “human rights approach to AI”. It has also set out two methodologies, the Readiness Assessment Methodology (RAM) and the Ethical Impact Assessment (EIA), for member states to assess their readiness for AI implementation.
- Building capacity and developing research on AI in order to improve regulatory frameworks and raise awareness of AI among consumers and developers;
- Providing sufficient funding for research centres and policy making;
- According to the Responsible Artificial Intelligence Policies and Regulations in Africa: The Gaps and A Way Forward policy brief, a conceptual framework for responsible AI in Africa that considers the continental landscape and new generations should be developed. A dedicated framework could become a basis for future policies and regulations while reflecting local needs.
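The human-in-the-loop arrangement mentioned in the solutions above can be sketched as a confidence gate: automated decisions below a threshold are routed to a human reviewer rather than applied automatically. The following is a hypothetical sketch (the threshold, names and decision structure are illustrative assumptions, not a standardised IEEE interface):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str            # e.g. "approve" / "deny"
    confidence: float       # model's confidence in the outcome, 0..1
    reviewed_by_human: bool = False

def decide(model_outcome: str, confidence: float,
           human_review: Callable[[str], str],
           threshold: float = 0.9) -> Decision:
    """Apply the model's decision only when its confidence is high enough;
    otherwise defer to a human reviewer (human in the loop)."""
    if confidence >= threshold:
        return Decision(model_outcome, confidence)
    # Low confidence: escalate, and record that a human made the call.
    return Decision(human_review(model_outcome), confidence,
                    reviewed_by_human=True)

# Usage: a borderline automated denial is escalated to a human, who
# overturns it; a high-confidence decision is applied directly.
escalated = decide("deny", 0.62, human_review=lambda suggested: "approve")
automatic = decide("approve", 0.95, human_review=lambda suggested: suggested)
print(escalated.outcome, escalated.reviewed_by_human)
print(automatic.outcome, automatic.reviewed_by_human)
```

The design choice here is that the system records whether a human intervened, which supports the transparency and accountability requirements that the data protection laws discussed above impose on ADM systems.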