Defense & Security
Dual-Use AI Technologies in Defense: Strategic Implications and Security Risks
First published: Jan. 19, 2026
Artificial intelligence has become a critical technology in the 21st century, with applications spanning healthcare, commerce, and scientific research. However, the same algorithms that enable medical diagnostics can guide autonomous weapons, and the same machine learning systems that power recommendation engines can identify military targets. This dual-use nature, where technologies developed for civilian purposes can be repurposed for military applications, has positioned AI as a central element in evolving global security dynamics. The strategic implications are substantial. China views AI as essential for military modernization, with the People's Liberation Army planning to deploy "algorithmic warfare" and "network-centric warfare" capabilities by 2030 (Department of Defense, 2024). Concurrently, military conflicts in Ukraine and Gaza have demonstrated the operational deployment of AI-driven targeting systems. As nations allocate significant resources to military AI development, a critical question emerges: whether the security benefits of dual-use AI technologies can be realized without generating severe humanitarian consequences.
Historically, military research and development drove technological innovation, with civilian applications emerging as secondary benefits, a phenomenon termed "spin-off." The internet, GPS, and microwave ovens all originated in defense laboratories. This dynamic has reversed. Commercially developed technologies now increasingly "spin into" the defense sector, with militaries dependent on technologies initially developed for commercial markets. This reversal carries significant implications for global security. Unlike the Cold War era, when the United States and the Soviet Union controlled nuclear weapons development through state programs, AI innovation occurs primarily in private sector companies, technology firms, and university research institutions. Organizations like DARPA influence global emerging technology development, with their projects often establishing benchmarks for research and development efforts worldwide (Defense Advanced Research Projects Agency, 2024). This diffusion of technological capacity complicates traditional arms control frameworks built around state-controlled military production.

The scale of investment is considerable. The U.S. Department of Defense's unclassified AI investments increased from approximately $600 million in 2016 to about $1.8 billion in 2024, with more than 685 active AI projects underway (Defense One, 2023). China's spending may exceed this figure, though exact data remains unavailable due to the opacity of Chinese defense budgeting. Europe is pursuing comparable investments, with the EU committing €1.5 billion to defense-related research and development through initiatives like the European Defence Fund.
AI's military applications span the spectrum of warfare, from strategic planning to tactical execution. Current deployments include:
● Intelligence, Surveillance, and Reconnaissance (ISR): AI systems process large volumes of sensor data, satellite imagery, and signals intelligence to identify patterns beyond human analytical capacity. In 2024, "China's commercial and academic AI sectors made progress on large language models (LLMs) and LLM-based reasoning models, which has narrowed the performance gap between China's models and the U.S. models currently leading the field," enabling more sophisticated intelligence analysis (Department of Defense, 2024).
● Autonomous Weapons Systems: Autonomous weapons can identify, track, and engage targets with minimal human oversight. In the Russia-Ukraine war, drones now account for approximately 70-80% of battlefield casualties (Center for Strategic and International Studies, 2025). Ukrainian officials predicted that AI-operated first-person-view drones could achieve hit rates of around 80%, compared to 30-50% for manually piloted systems (Reuters, 2024).
● Predictive Maintenance and Logistics: The U.S. Air Force employs AI in its Condition-Based Maintenance Plus program for F-35 fighters, analyzing sensor data to predict system failures before they occur, reducing downtime and operational costs (a minimal illustrative sketch of this approach follows this list).
● Command and Control: AI assists military commanders in processing battlefield information and evaluating options at speeds exceeding human capacity. Project Convergence integrates AI, advanced networking, sensors, and automation across all warfare domains (land, air, sea, cyber, and space) to enable synchronized, real-time decision-making.
● Cyber Operations: AI powers both offensive and defensive cyber capabilities, from automated vulnerability discovery to malware detection and sophisticated social engineering campaigns.
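The predictive-maintenance use case can be illustrated with a minimal sketch. The snippet below is not the Air Force's CBM+ implementation; it only shows the general idea of flagging a component whose recent sensor readings drift away from a historical baseline. All function names, data, and thresholds are hypothetical.

```python
# Minimal sketch of condition-based maintenance logic: flag components whose
# recent sensor readings drift far from their historical baseline.
# Illustrative toy only; all names, thresholds, and data are hypothetical.
from statistics import mean, stdev

def maintenance_alert(history: list[float], recent: list[float],
                      z_threshold: float = 3.0) -> bool:
    """Return True if recent readings deviate strongly from the baseline."""
    baseline_mean = mean(history)
    baseline_std = stdev(history)
    recent_mean = mean(recent)
    z_score = (recent_mean - baseline_mean) / baseline_std
    return abs(z_score) > z_threshold

# Hypothetical vibration readings (arbitrary units) from an engine sensor.
baseline = [0.98, 1.01, 1.00, 0.99, 1.02, 1.00, 0.97, 1.03]
latest = [1.45, 1.52, 1.48]  # drifting upward: possible bearing wear

if maintenance_alert(baseline, latest):
    print("Schedule inspection before next sortie.")
```

Real systems replace the simple threshold with learned failure models, but the principle is the same: intervene when observed behavior departs from the expected envelope.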
Recent conflicts have provided operational demonstrations of AI's military applications and their humanitarian costs. Israel's Lavender system reportedly identified up to 37,000 potential Hamas-linked targets, with sources claiming error rates near 10 percent (972 Magazine, 2024). An Israeli intelligence officer stated that "the IDF bombed targets in homes without hesitation, as a first option. It's much easier to bomb a family's home" (972 Magazine, 2024). The system accelerated airstrikes but also contributed to civilian casualties, raising questions about algorithmic accountability. The system's design involved explicit tradeoffs: prioritizing speed and scale over accuracy. According to sources interviewed by 972 Magazine, the army authorized the killing of up to 15 or 20 civilians for every junior Hamas operative that Lavender marked, while in some cases more than 100 civilians were authorized to be killed to assassinate a single senior commander (972 Magazine, 2024). Foundation models trained on commercial data lack the reasoning capacity humans possess, and when applied to military targeting, their false positives translate directly into civilian deaths (the back-of-envelope sketch below illustrates the scale of the problem). Targeting profiles built from WhatsApp metadata, Google Photos, and other commercial platforms rest on patterns that may not correspond to combatant status.

Ukraine has implemented different approaches, using AI to coordinate drone swarms and enhance defensive capabilities against a numerically superior adversary. Ukrainian Deputy Defense Minister Kateryna Chernohorenko stated that "there are currently several dozen solutions on the market from Ukrainian manufacturers" for AI-augmented drone systems being delivered to the armed forces (Reuters, 2024). Ukraine produced approximately 2 million drones in 2024, with AI-enabled systems achieving engagement success rates of 70 to 80 percent compared to 10 to 20 percent for manually controlled drones (Center for Strategic and International Studies, 2025). Both sides in the conflict have developed AI-powered targeting systems, creating operational arms-race dynamics with immediate battlefield consequences.
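The humanitarian stakes of the Lavender figures cited above can be made concrete with back-of-envelope arithmetic. The sketch below uses the 37,000-target count and the roughly 10 percent error rate from the 972 Magazine reporting; the sensitivity, specificity, and prevalence values in the second half are assumptions chosen only to illustrate the base-rate problem, not reconstructions of the actual system.

```python
# Back-of-envelope arithmetic on the figures reported above. The 37,000 target
# count and ~10% error rate come from the cited 972 Magazine reporting; the
# sensitivity, specificity, and prevalence values are hypothetical assumptions.
flagged_targets = 37_000
reported_error_rate = 0.10

expected_misidentifications = flagged_targets * reported_error_rate
print(f"Expected misidentified individuals: {expected_misidentifications:,.0f}")  # ~3,700

# Base-rate effect: even a seemingly accurate classifier produces many false
# positives when actual combatants are rare in the scanned population.
sensitivity = 0.90   # P(flagged | combatant), assumed
specificity = 0.90   # P(not flagged | civilian), assumed
prevalence = 0.01    # assumed share of combatants in the scanned population

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
precision = true_pos / (true_pos + false_pos)
print(f"Share of flagged people who are actually combatants: {precision:.1%}")  # ~8.3%
```

Under these assumed numbers, the large majority of people flagged would not be combatants, which is why error rates that look modest in percentage terms become severe at the scale of tens of thousands of targets.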
The integration of AI into lethal military systems raises humanitarian concerns that extend beyond technical reliability. AI's inability to uphold the principle of distinction, which requires protecting civilians by distinguishing them from combatants under international humanitarian law, presents a fundamental challenge. Current AI systems lack several capabilities essential for lawful warfare:
● Contextual Understanding: AI cannot comprehend the complex social, cultural, and situational factors that determine combatant status. A person carrying a weapon might be a combatant, a civilian defending their home, or a shepherd protecting livestock.
● Proportionality Assessments: International humanitarian law requires that military attacks not cause disproportionate civilian damage. Human Rights Watch has noted that it is doubtful whether robotic systems can make such nuanced assessments (Human Rights Watch, 2024).
● Moral Judgment: Machines lack the capacity for compassion, mercy, or understanding of human dignity, qualities that have historically provided safeguards against wartime atrocities.
● Accountability: With autonomous weapon systems, responsibility is distributed among programmers, manufacturers, and operators, making individual accountability difficult to establish.
As one expert observed, "when AI, machine learning and human reasoning form a tight ecosystem, the capacity for human control is limited. Humans have a tendency to trust whatever computers say, especially when they move too fast for us to follow" (The Conversation, 2024). The risks extend to specific populations. Autonomous weapons systems trained on historical records consisting predominantly of male combatants can encode algorithmic bias. In the case of Lavender, analysis suggests "one of the key equations was 'male equals militant,'" echoing the Obama administration's approach during drone warfare operations (The Conversation, 2024); the toy sketch following this paragraph shows how skewed training labels produce exactly this shortcut. Communities of color and Muslim populations face heightened risks given historical patterns of discriminatory force deployment.
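To make the bias point concrete, the toy sketch below shows how a naive model fit to a skewed historical dataset learns sex as a near-sufficient predictor of "militant." All of the records, counts, and probabilities are synthetic and illustrative; this is not a reconstruction of Lavender or of any deployed system.

```python
# Toy illustration of how a skewed historical dataset can teach a targeting
# model the shortcut "male equals militant". All records are synthetic;
# this is not a reconstruction of any deployed system.
from collections import Counter

# Hypothetical historical records: (sex, was_labeled_militant)
history = (
    [("male", True)] * 900
    + [("male", False)] * 100
    + [("female", True)] * 10
    + [("female", False)] * 990
)

# A naive frequency-based "model" learns P(militant | sex) from these labels.
counts = Counter((sex, label) for sex, label in history)

def p_militant(sex: str) -> float:
    positives = counts[(sex, True)]
    total = positives + counts[(sex, False)]
    return positives / total

print(f"P(militant | male)   = {p_militant('male'):.2f}")    # 0.90
print(f"P(militant | female) = {p_militant('female'):.2f}")  # 0.01

# If targeting simply thresholds this learned probability, virtually every
# adult male in a surveilled civilian population gets flagged, regardless of
# actual combatant status.
```

The failure is not in the arithmetic but in the data: when the historical labels overwhelmingly associate one demographic attribute with the positive class, a statistical model will reuse that attribute as a proxy, and applying it to a broader population transfers the bias to people the data never represented.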
Recognizing AI's strategic importance, governments have implemented export control regimes. The U.S. Bureau of Industry and Security now requires licenses for exports of advanced computing chips and AI model weights, imposing security conditions on how the most advanced models are stored. These controls face an inherent tension. Overly broad restrictions risk hampering legitimate research and commercial innovation; analysis suggests that if AI technology is too extensively controlled, American universities may struggle to perform AI research, leaving the U.S. AI ecosystem less robust. Insufficient controls, by contrast, enable adversaries to acquire cutting-edge capabilities. The effectiveness of export controls remains uncertain. In 2024, hundreds of thousands of chips, worth millions of dollars, were reportedly smuggled into China through shell companies, varied distributors, and mislabeling techniques (Oxford Analytica, 2025). China's DeepSeek models, which achieved performance approaching U.S. systems, were reportedly trained on chips that circumvented export restrictions.
The international community has struggled to develop coherent governance frameworks for dual-use AI. Rather than a cohesive global regulatory approach, what has emerged is a patchwork of national policies, multilateral agreements, high-level summits, declarations, frameworks, and voluntary commitments. Multiple international forums have addressed AI governance:
● The UN Secretary-General created an AI Advisory Board and called for a legally binding treaty to prohibit lethal autonomous weapons systems without human control, to be concluded by 2026.
● The Group of Governmental Experts on Lethal Autonomous Weapons Systems has held discussions under the Convention on Certain Conventional Weapons for more than a decade, with limited concrete progress.
● NATO released a revised AI strategy in 2024, establishing standards for responsible use and accelerated adoption in military operations.
● The EU's AI Act, adopted in 2024, explicitly excludes military applications and national security from its scope.
This fragmented landscape reflects geopolitical divisions. The perceived centrality of AI to strategic competition has led the U.S. to position itself as the leader of ideologically aligned countries in opposition to China, including on security matters. China promotes its own governance vision through initiatives like the Belt and Road, exporting technology standards alongside infrastructure.
AI creates strategic stability challenges. Autonomous weapons enable the substitution of machines for human soldiers in many battlefield roles, reducing the human cost, and thus the political cost, of waging offensive war. This could increase the frequency of conflicts between peer adversaries, each believing it can prevail without significant domestic casualties. In conflicts between non-peer adversaries, reduced casualties further diminish domestic opposition to wars of aggression.

The implications extend beyond conventional warfare. Armed, fully autonomous drone swarms could combine mass harm with a lack of human control, potentially becoming weapons of mass destruction comparable to low-yield nuclear devices. The technical barriers to such systems are falling as components become commercially available.

AI also complicates nuclear stability. Advances in AI-enhanced sensors and data processing could undermine second-strike capabilities by improving detection of mobile missile launchers and submarines. This erosion of assured retaliation could incentivize first strikes during crises. Simultaneously, AI systems managing nuclear command and control create risks of accidents, miscalculations, or unauthorized launches.
The integration of AI into warfare strains traditional ethical frameworks. Just War Theory requires that combatants maintain moral responsibility for their actions, possess the capacity to distinguish combatants from civilians, and apply proportionate force. Automation bias and technological mediation weaken moral agency among operators of AI-enabled targeting systems, diminishing their capacity for ethical decision-making. When operators interact with targeting through screens displaying algorithmic recommendations rather than direct observation, psychological distance increases. This mediation risks transforming killing into a bureaucratic process. The operator becomes less a moral agent making decisions and more a technician approving or rejecting algorithmic suggestions. Furthermore, industry dynamics, particularly venture capital funding, shape discourses surrounding military AI, influencing perceptions of responsible AI use in warfare. When commercial incentives align with military applications, the boundaries between responsible innovation and reckless proliferation become unclear. Companies developing AI for civilian markets face pressure to expand into defense contracting, often with insufficient ethical deliberation.
Dual-use AI technologies present both opportunities and risks for international security. One trajectory leads toward normalized algorithmic warfare at scale, arms races in autonomous weapons that erode strategic stability, and inadequate international governance resulting in civilian harm. An alternative trajectory involves international cooperation that constrains the most dangerous applications while permitting beneficial uses. The timeframe for establishing governance frameworks is limited. AI capabilities are advancing rapidly, and widespread proliferation of autonomous weapons will make policy reversal substantially more difficult. The challenge resembles nuclear non-proliferation but unfolds at greater speed, driven by commercial incentives rather than state-controlled programs. Because AI is dual-use, the same technical advances yield both economic and security benefits; unilateral restraint by democratic nations would therefore cede advantages to authoritarian competitors. However, uncontrolled competition risks adverse outcomes for all parties.

Concrete action is required from multiple actors. States must strengthen multilateral agreements through forums like the UN Convention on Certain Conventional Weapons to establish binding restrictions on autonomous weapons without meaningful human control. NATO and regional security alliances should harmonize AI ethics standards and create verification mechanisms for military AI deployments. Military institutions must implement mandatory human-in-the-loop requirements for lethal autonomous systems and establish clear chains of accountability for AI-driven targeting decisions. Technology companies developing dual-use AI systems bear responsibility for implementing ethical safeguards and conducting thorough threat modeling before commercial release. Industry alliances should establish transparency standards for military AI applications and create independent audit mechanisms. Universities and research institutions must integrate AI ethics and international humanitarian law into technical training programs. Export control regimes require coordination between the United States, the EU, and allied nations to prevent regulatory arbitrage while avoiding overreach that stifles legitimate research. Democratic governments should lead by demonstrating that military AI can be developed within strict ethical and legal constraints, setting standards that distinguish legitimate security applications from destabilizing weapons proliferation.

As Austrian Foreign Minister Alexander Schallenberg observed, this is the current generation's Oppenheimer moment: dual-use AI, like nuclear technology, demands collective restraint over its military applications. The policy choices made in the next few years will have long-term consequences. They will determine whether AI becomes a tool for human advancement or an instrument of algorithmic warfare. The technology exists; the policy framework remains to be established. The actors are identified; the question is whether they possess the political will to act before proliferation becomes irreversible.
References
972 Magazine (2024) 'Lavender': The AI machine directing Israel's bombing spree in Gaza. https://www.972mag.com/lavender-ai-israeli-army-gaza/
Center for Strategic and International Studies (2024) Where the Chips Fall: U.S. Export Controls Under the Biden Administration from 2022 to 2024. https://www.csis.org/analysis/where-chips-fall-us-export-controls-under-biden-administration-2022-2024
Center for Strategic and International Studies (2025) Ukraine's Future Vision and Current Capabilities for Waging AI-Enabled Autonomous Warfare. https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare
Defense One (2023) The Pentagon's 2024 Budget Proposal, In Short. https://www.defenseone.com/policy/2023/03/heres-everything-we-know-about-pentagons-2024-budget-proposal/383892/
Department of Defense (2024) Military and Security Developments Involving the People's Republic of China 2024. https://media.defense.gov/2024/Dec/18/2003615520/-1/-1/0/MILITARY-AND-SECURITY-DEVELOPMENTS-INVOLVING-THE-PEOPLES-REPUBLIC-OF-CHINA-2024.PDF
Foreign Policy Research Institute (2024) Breaking the Circuit: US-China Semiconductor Controls. https://www.fpri.org/article/2024/09/breaking-the-circuit-us-china-semiconductor-controls/
Human Rights Watch (2024) A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making. https://www.hrw.org/report/2025/04/28/a-hazard-to-human-rights/autonomous-weapons-systems-and-digital-decision-making
National Defense Magazine (2024) Pentagon Sorting Out AI's Future in Warfare. https://www.nationaldefensemagazine.org/articles/2024/10/22/pentagon-sorting-out-ais-future-in-warfare
Queen Mary University of London (2024) Gaza war: Israel using AI to identify human targets raising fears that innocents are being caught in the net. https://www.qmul.ac.uk/media/news/2024/hss/gaza-war-israel-using-ai-to-identify-human-targets-raising-fears-that-innocents-are-being-caught-in-the-net.html
Reuters (2024) Ukraine rolls out dozens of AI systems to help its drones hit targets. https://euromaidanpress.com/2024/10/31/reuters-ukraine-rolls-out-dozens-of-ai-systems-to-help-its-drones-hit-targets/
First published in: World & New World Journal