News Analysis – By Abdul Lauya
As nations worldwide race to legislate the use, risks, and promise of artificial intelligence (AI), Nigeria stands at a critical juncture. Policymakers, academics, and technologists alike are asking: is the country ready, in infrastructure, intent, and institutional capacity, to craft a coherent, enforceable national AI policy?
While some commentators have credited Professor Kayode Adebowale, Vice Chancellor of the University of Ibadan, with stating that Nigeria is “ripe for a national policy on artificial intelligence,” no verifiable public record confirms this direct quote. Nonetheless, the idea reflects a growing consensus among experts who argue that Nigeria must no longer delay in establishing legal and ethical guardrails for AI deployment.
Currently, Nigeria has no stand-alone AI legislation. However, sectoral policies and laws, including the Nigeria Data Protection Act (2023), the Cybercrimes Act (2015), and emerging guidelines from the Securities and Exchange Commission (SEC) and the Nigerian Communications Commission (NCC), indirectly touch on AI-related concerns. Meanwhile, the National Information Technology Development Agency (NITDA) and the National Centre for Artificial Intelligence and Robotics (NCAIR) have made progress through research support, stakeholder consultations, and the release of a draft National Artificial Intelligence Strategy (NAIS) in 2024.
But a strategy, no matter how robust, lacks the authority and enforceability of law.
In response, the National Assembly has introduced a Bill for an Act to establish a National Artificial Intelligence and Robotics Agency. A consolidated version of House Bills 601 and 942 passed second reading in December 2024. The bill aims to provide a statutory framework for AI regulation, ethics enforcement, and national innovation. However, the proposed agency's scope, potential regulatory overlaps, and funding model have drawn criticism. Legal analysts warn that unless the bill is sharpened and harmonized with existing bodies such as NITDA, it risks becoming a symbolic gesture rather than a functional oversight mechanism.
Globally, the regulatory bar is rising. The European Union has passed its Artificial Intelligence Act, the world’s first comprehensive law governing AI based on risk classification. High-risk applications, such as facial recognition and AI in healthcare, must meet strict compliance standards. Canada and the UK emphasize public accountability, while the U.S. has pivoted toward executive-led governance through AI safety and rights directives. In Africa, the African Union is moving toward a continental AI governance protocol, with countries like Egypt, South Africa, and Rwanda actively developing their own national strategies.
Nigeria, by contrast, remains at the consultation stage. Without a binding law, AI development remains largely unregulated, leaving loopholes in data protection, intellectual property, algorithmic bias, and cybersecurity. As generative AI becomes more sophisticated, the risks of deepfakes, misinformation, automated fraud, and electoral interference rise, posing real threats to national security and democratic institutions.
Experts argue that Nigeria’s readiness should not be measured solely by technological maturity, but by political will, legislative clarity, and institutional coordination. In that sense, the country is as ripe as it is vulnerable.
To move forward, Nigeria must expedite passage of the bill establishing the National Artificial Intelligence and Robotics Agency, with clear mandates and coordination frameworks. A risk-based regulatory model, inspired by the EU but adapted to Nigeria's context, should guide implementation. This includes human oversight for high-impact AI systems, algorithmic audits, and penalties for misuse.
Equally crucial is capacity building. Regulatory agencies, courts, and law enforcement must be equipped with the knowledge and tools to interpret and enforce AI-related rules. Nigeria should also embrace regulatory sandboxes, controlled environments where innovators can test AI products under the supervision of regulators. Partnerships with UNESCO and African tech policy hubs, along with alignment with the UN Global Digital Compact, can offer additional support and benchmarking.
The deployment of AI in Nigeria must be not only inclusive and ethical, but also resilient to misuse. Electoral technology, biometric data systems, and AI-driven surveillance tools need to be shielded from manipulation. Public trust will hinge on transparency, accountability, and clear rules for both state and private actors.
So, is Nigeria ripe for a national AI policy? The answer is yes, but ripeness does not mean readiness without action. The time to legislate, coordinate, and secure the country's AI future is now. Delay, as global precedent shows, only increases the costs in innovation, public safety, and national credibility.
Editor’s Note: This analysis is part of our “Digital Nigeria” series. For more stories on emerging tech and policy, visit our website: www.eyereporters.com