Abstract
The integration of artificial intelligence (AI) into military operations is transforming modern warfare, presenting both opportunities and risks, especially in conflict-affected areas where human security challenges persist. In Nigeria, where security threats from the Boko Haram insurgency, inter-communal violence, and banditry present complex challenges, AI holds the potential to enhance military capabilities and improve operational efficiency. However, its deployment also raises critical concerns about unintended consequences, including bias, loss of human oversight, and, if left unchecked, the escalation of conflict. This duality underscores the urgent need for carefully crafted policies that manage the risks of AI while ensuring its ethical application in military operations. Based on the available literature, this research finds that Nigeria faces a critical gap in AI regulation: current military policies insufficiently address the ethical and human security implications of autonomous weapons systems. The research therefore explores the policy solutions required to manage these risks and safeguard human security in the Nigerian military. It is interdisciplinary, interrogating extant literature and empirical evidence from strategic and war studies, legal and public policy scholarship, and technology applications. Methodologically, it employs a qualitative approach, combining a review of existing policies with a case study analysis of Nigerian military operations in the northeast. Primary data will be acquired through interviews with military and AI ethics experts to understand how AI is currently utilised in Nigerian military operations and the challenges it poses. The research aims to provide a nuanced understanding of, and a balanced approach to, AI integration in military operations, while offering policy solutions for the ethical application of AI in military contexts.
Keywords: Artificial intelligence, military operations, human security, public policy, Nigerian military
Introduction
The deployment of artificial intelligence (AI) in military operations marks a significant shift in modern warfare and internal security strategies, especially in states experiencing irregular warfare. In Nigeria, AI technologies such as surveillance and combat drones are increasingly being integrated into counterinsurgency (COIN) and counterterrorism (CT) operations against groups such as Boko Haram and the Islamic State West Africa Province (ISWAP) in the North East, as well as against banditry in the North West (Obe 2022; Ndukwe and Olumide 2023). While these technologies enhance military capabilities and efficiency, their deployment amid weak regulatory frameworks, poor transparency and accountability mechanisms, and a history of human rights violations by security forces raises serious ethical, legal, and humanitarian concerns. Despite these critical concerns, the literature on AI and military ethics focuses on the regulatory frameworks and doctrinal policies of Western countries and institutions such as the United States Department of Defense, NATO, and the European Union (Scharre 2018; Boulanin and Verbruggen 2017). There appears to be a paucity of scholarship on the deployment of AI in African military operational environments, particularly in Nigeria, despite the country's active use of AI technologies in military operations (Aghedo and Eke 2020). Most importantly, Nigeria appears to lack any coherent policy or legal framework to regulate the application of AI in military operations, and no targeted legislative or ethical guidelines exist for its use.
This study interrogates this regulatory vacuum by exploring viable policy solutions for managing the risks of AI in military operations in Nigeria, while safeguarding human security. It investigates the implications of Nigeria's increasing adoption of AI-enabled military technologies, examines the significant policy vacuum surrounding their deployment, and proposes practical policy solutions grounded in international humanitarian law, AI ethics, and accountability. The study adopts a multidimensional lens, drawing on international humanitarian law, security studies, public policy, and technology studies. By doing so, the paper contributes to an emerging body of literature that bridges the gap between the use of emerging technologies and the imperative of protecting civilian populations in African contexts.
Methodology
This study employs a qualitative research design through an interpretivist lens to explore the risks posed by the deployment of AI in Nigerian military operations and the policy frameworks necessary to safeguard human security. The primary research question guiding this study is: how can Nigeria manage the risks of AI in military operations through effective policy frameworks to protect human security? The research adopts AI as a subject of inquiry rather than a tool of analysis, critically interrogating what policy framework could safeguard human security in AI-driven military operations. This choice reflects the study's concern with normative and institutional issues such as legality, ethics, and the implications for human security.
Semi-structured interviews with 15–20 key informants, including senior military officers, AI ethics scholars, and policymakers, will uncover how AI tools are currently being deployed in military operations in Nigeria and what challenges arise in the absence of specific legal and ethical frameworks. Participants will be selected through purposive sampling on the basis of expertise and experience. Secondary sources, including Nigerian military strategy documents and reports from oversight institutions such as the National Human Rights Commission, as well as international bodies such as the UN, the International Committee of the Red Cross, and Amnesty International, will complement the primary data. The data will be analysed using thematic content analysis to identify recurring patterns and generate key themes. NVivo or similar qualitative software may be used to code and organise large volumes of textual data, enhancing transparency and analytical rigour.
Given the sensitive nature of military operations and national security, all research will comply with ethical standards relating to informed consent, confidentiality, and risk mitigation. Interview participants will be briefed on the study's purpose and will be free to withdraw at any stage. Anonymity will be maintained where required, and ethics clearance will be obtained from the appropriate institutional board prior to data collection where required.
Expected Outcomes and Findings
The study will develop a comprehensive and context-sensitive framework for managing the risks of AI in military contexts—one that prioritises civilian protection, ethical oversight, and democratic accountability. In doing so, the study will fill a significant knowledge gap in the intersection of AI, military practice, and human rights in Nigeria. It will specifically reveal the absence of a national defence AI policy and the lack of institutional safeguards for preventing the misuse of AI technologies in military operations in Nigeria. These findings will be critical in demonstrating that Nigeria’s adoption of AI in military operations is currently unregulated, opaque, and susceptible to both misuse and ethical violations, thus undermining both human security and operational legitimacy.
The research also intends to map policy entry points, such as legislative oversight and ethical review mechanisms for the armed forces, which can serve as foundational steps towards a national AI defence governance framework. The study will prescribe how Nigeria can align with global best practices, such as those proposed by NATO, the EU, and the United States Department of Defense, while tailoring these approaches to its domestic security and governance context.
Interdisciplinary Implications
This study bridges critical disciplinary boundaries by intersecting the fields of artificial intelligence, security studies, human rights law, ethics, and public policy. It contributes to the expanding field of science and technology studies by exploring how emerging technologies interact with social and political systems, especially in fragile contexts like Nigeria, where governance structures and security institutions are in a delicate balance.
Conclusion
This study illuminates the complex and underexplored terrain of AI in Nigerian military operations, particularly through the lens of human security, ethical accountability, and government policy. From the findings, a central generalisation can be drawn: Nigeria’s adoption of AI in defence remains ahead of its institutional, legal, and ethical readiness. While the Nigerian military has begun integrating AI tools in CT and COIN activities, there exists a significant gap in regulatory oversight and human rights safeguards.
The study therefore concludes that AI’s militarisation, in the absence of robust policy frameworks, poses serious risks to civilian safety, democratic governance, and operational legitimacy. It also shows that human-centred and context-specific policies are urgently needed to mitigate these risks and ensure that technological advancement does not outpace normative safeguards. Broadly, the study offers a framework for integrating AI into military operations in ways that are accountable, culturally responsive, and aligned with Nigeria’s constitutional and international obligations.
Acknowledgements
I acknowledge the invaluable support and guidance I received throughout this fellowship. My deepest gratitude goes to my mentors, Dr Chinasa and Mr Nelson, whose intellectual insights, critical feedback, and encouragement significantly shaped the direction and depth of this study. Their expertise in the field of AI governance was instrumental in refining my conceptual framework and methodological approach. I am also grateful to the facilitators and coordinators of the fellowship for their excellent instruction and support. Their openness enabled unrestricted access to resources, and their dedication to fostering interdisciplinary research greatly enriched my experience. Special appreciation is extended to the other fellows and navigators who contributed to a rewarding learning experience. Lastly, I acknowledge the institutional support provided by ResearchRound, whose research infrastructure, learning environment, and access to scholarly materials were essential to the successful completion of the fellowship.
References
Aghedo, I., & Eke, S. J. (2020). The use and misuse of intelligence technology in Nigeria’s counterinsurgency war. African Security Review, 29(3), 225–240.
Boulanin, V., & Verbruggen, M. (2017). Mapping the development of autonomy in weapon systems. SIPRI.
Obe, A. (2022). Nigeria’s digital surveillance and the shrinking civic space. Open Society Foundations Report.
Ndukwe, I., & Olumide, A. (2023). Surveillance technology in Nigeria: Security priorities and human rights risks. Centre for Digital Policy Studies.
Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W. W. Norton & Company.