
2025-07-14
What Should Kenya’s AI Laws Actually Cover?
Introduction
In our earlier piece, “Striking a Balance: Kenya’s AI Strategy (2025–2030) and What It Means for You,” we explored the ambitions of Kenya’s National Artificial Intelligence (AI) Strategy (the Strategy), published in March 2025, which outlined the government’s roadmap for establishing ethical, inclusive, and innovative AI regulatory and institutional frameworks. The Strategy acknowledges a fundamental truth: there are currently no specific laws or regulations in Kenya that directly regulate AI. As AI moves from futuristic concept to present-day reality, it is prudent to establish a robust AI legal and institutional framework in Kenya. This second instalment lays some legal groundwork on what Kenya’s AI legal and institutional framework should cover.
Current Legal Framework
While Kenya does not have AI-specific legislation, several existing laws address some aspects of AI use, offering a foundation on which a more coherent and robust legal regime can be built. Some of these laws are discussed below.
- The Data Protection Act, 2019
The Data Protection Act is Kenya’s primary legislation governing the processing of personal data. While it does not explicitly mention AI, it lays down several principles that directly affect how AI systems are used, particularly those that process personal data.
The Act requires that consent for data processing be informed, specific, and freely given (Section 32). For AI systems that collect or analyze personal data, this means users must clearly understand what data is being collected, how it will be used, and whether it will be processed by automated systems.
Additionally, Section 35 of the Act gives data subjects the right not to be subject to a decision based solely on automated processing, including profiling, where such a decision produces legal effects that significantly affect them. This is particularly relevant to AI applications and systems used, for instance, in credit scoring, job recruitment, or access to services. The provision aims to ensure fairness, transparency, and accountability in a world where algorithms increasingly make decisions that affect people’s rights and opportunities.
- The Computer Misuse and Cybercrimes Act, 2018
While Kenya’s Computer Misuse and Cybercrimes Act does not expressly address artificial intelligence in its current form, it creates offences relating to computer systems and aims to enable the timely and effective detection, prohibition, prevention, response, investigation, and prosecution of computer and cybercrimes.
Under the Act, several offences, though not explicitly targeted at AI, are particularly relevant to AI-powered systems and their misuse. These include unauthorized access to computer systems (Section 14), unauthorized interference with a computer system, program, or data (Section 16), the dissemination of false publications or misinformation (Sections 22 and 23), which could involve AI-generated content, and identity theft and impersonation (Section 29), especially where AI tools are used to simulate human behavior. These provisions help regulate the ethical use of AI and mitigate the risks posed by its malicious or deceptive use.
- The Consumer Protection Act, 2012
The Consumer Protection Act, 2012, though it makes no express reference to AI, lays a legislative foundation for addressing AI-related concerns, particularly in ensuring that AI-powered products and services uphold consumer rights and provide accurate information in an increasingly digital marketplace.
Part III of the Act prohibits false, misleading, or unconscionable representations in consumer transactions, a prohibition relevant to AI-powered tools that may use manipulative algorithms to influence consumer behavior. In this regard, the Act provides a framework to ensure consumers are not misled by AI-powered tools.
Key Focus Areas for AI Regulation
As discussed above, while some of Kenya’s existing laws provide a foundational legal base and a measure of protection, they were not designed with the unique characteristics and challenges of AI in mind. So, what areas should an AI regulatory governance framework focus on?
- Risk-based Classification
As Kenya moves toward establishing a regulatory governance framework for AI, one area of focus is the categorization of AI systems based on risk. AI technologies vary widely in functionality and capability, ranging, for instance, from task-specific AI to more advanced, human-like AI technologies. It is therefore prudent to put in place a regulatory response proportionate to the level of risk each system poses to individuals, institutions, and national interests.
The Strategy acknowledges this need and recommends a framework that not only ensures the responsible and ethical development and use of AI systems but also provides for oversight mechanisms, risk management, and accountability measures.
Further, we consider that incorporating a risk-based classification approach into the regulatory framework would be in line with global best practice. The European Union AI Act, for instance, provides a benchmark in risk-based governance by categorizing AI systems into four categories[1]:
- Unacceptable risk: AI systems deemed a threat to people’s safety or fundamental rights, which are prohibited.
- High risk: AI systems that are not prohibited but are subject to strict requirements before they may be made available to the public.
- Limited risk: AI systems posing moderate risk, which are subject to transparency obligations; users must be informed that they are interacting with an AI system.
- Minimal risk: AI systems not subject to mandatory regulation, though developers are encouraged to follow ethical guidelines in their use.
This risk-based classification model provides safeguards, reserving strict obligations and compliance requirements for high-risk AI technologies (such as those affecting fundamental rights or public safety) and adopting minimal oversight for low-risk applications.
- Key Compliance Requirements
Developing a robust regulatory governance framework for AI requires the incorporation of clear and enforceable compliance requirements that promote the ethical, transparent, and accountable use of AI technologies. The Strategy requires developers and stakeholders in the AI space to comply with and maintain high standards of transparency, accountability, security and privacy measures, risk management capacity, and governance of AI systems. This informs the compliance requirements that the regulatory framework should encompass.
In particular, the regulatory framework should require that AI systems be designed, operated, and used in a manner that ensures compliance with transparency, access, accuracy, and accountability principles.
Further, the AI regulatory framework should be aligned with existing legal principles and standards, particularly with respect to data protection, consumer protection, and cybersecurity, thus ensuring that AI systems do not operate in isolation from the broader regulatory requirements already in place.
- Regulatory Agency
Recognizing the need for regulatory oversight, the Strategy proposes mechanisms for testing and validating AI technologies before they are deployed. This highlights the role of a regulatory agency in providing the institutional structure necessary to implement and operationalize compliance requirements and to monitor and respond to the evolving AI landscape. To fulfill this role, it would be prudent for the legal framework to establish an AI regulatory agency with AI-specific capacity, mandated to set standards, oversee risk assessments, license AI systems, and guide ethical and responsible innovation and use of AI technologies across various sectors.
Additionally, the legal framework should grant this regulatory agency enforcement powers against non-compliant entities, ensuring that accountability is not just a theoretical principle but an enforceable one.
- Enforcement Actions
The AI legal framework should provide for enforcement actions such as fines, suspension of AI systems, and revocation of licenses. With the regulator so empowered, such enforcement actions would serve as a deterrent against misuse and promote trust in AI technologies by holding developers accountable for failures such as biased algorithms, lack of transparency, and breaches of data privacy or consumer protection rules.
Conclusion
With the increased development and use of AI technologies, the Strategy signals Kenya’s keen focus on, and need for, effective regulation of the AI space, recognizing that while some of the existing laws offer a foundational base, they are not sufficiently tailored to address AI-related issues. The call is to establish an AI-specific legal and regulatory framework that addresses the full lifecycle of AI systems, from development through deployment and use, drawing inspiration from comprehensive models such as the European Union’s Artificial Intelligence Act and South Korea’s Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base[2]. As outlined above, this framework should focus on key areas such as risk categorization, compliance requirements, institutional oversight, and enforcement mechanisms to ensure the responsible, ethical, and secure use of AI in Kenya.
[1] Butt, J. (2024), Analytical study of the world’s first EU Artificial Intelligence (AI) Act, International Journal of Research Publication and Reviews, 5(3), 7343-7364.
[2] Park, M. S., & Chang, S. D. (2022), Review of Artificial Intelligence Platform Policies and Strategies in South Korea, United States, China and the European Union Using National Innovation Capacity, International Journal of Knowledge Content Development & Technology, 12(3), 79-99.
Esther Omulele, Margret Muiruri