
Opinion – The Mearsheimer Logic Underlying Trump’s National Security Strategy

by Mark N. Katz

The Trump Administration's recently released National Security Strategy (NSS) has upended the decades-long consensus about American foreign policy. Most notable are its prioritization of the Western Hemisphere as an American security concern, its reduced emphasis on defending America's traditional European allies, its identification of China as far more of a threat than Russia, and its determination not to be drawn into conflicts in the Middle East and Africa.

But while the 2025 NSS breaks with much of previous American foreign policy, the logic behind it is not entirely new. Even though the document makes no mention of him, the policy it outlines comports with what John Mearsheimer described in his influential book The Tragedy of Great Power Politics, first published in 2001 and updated in 2014. In it, Mearsheimer declared that no nation has ever achieved global hegemony. According to Mearsheimer, America is the only country that has achieved predominant influence in its own region (the Western Hemisphere) while also preventing any other great power from dominating any other region. As he wrote, "States that achieve regional hegemony seek to prevent great powers in other regions from duplicating their feat. Regional hegemons, in other words, do not want peers" (2014 edition, p. 41).

Trump's 2025 National Security Strategy has, whether knowingly or not, adopted these aims as well. It discusses the various regions of the world in the order of their priority for the Trump Administration: the Western Hemisphere first, followed by Asia (or the Indo-Pacific), Europe, the Middle East, and lastly Africa. With regard to the Western Hemisphere, the NSS unambiguously calls for the restoration of "American preeminence in the Western Hemisphere," and states, "We will deny non-Hemispheric competitors the ability to position forces or other threatening capabilities, or to own or control strategically vital assets, in our Hemisphere." This is very much in keeping with Mearsheimer's description of America as the regional hegemon of the Western Hemisphere. As for the other four regions, the Trump Administration either seeks to prevent any other great power from becoming predominant or does not regard such predominance as a realistic possibility.

According to the NSS, the Middle East was a priority in the past because it was the world's most important energy supplier and a prime theater of superpower conflict. Now, however, there are other energy suppliers (including the U.S.), and superpower competition has been replaced by "great power jockeying" in which the U.S. retains "the most enviable position." In other words, the Trump Administration does not see any other great power as able to become predominant in a region that is, in any case, less strategically important than it used to be. Similarly, the NSS does not see any other great power as even seeking predominance in Africa, and thus regards America's interests there as mainly commercial.

By contrast, China is seen as a threat in the Indo-Pacific region, though the NSS discusses Chinese threats in the economic and technological spheres before turning to the military one. A continued U.S. military presence in the region is seen as important for preventing Chinese predominance.
But Japan, South Korea, Taiwan, and Australia are all enjoined by the NSS to increase their defense spending in order to counter this threat. The NSS also identifies "the potential for any competitor to control the South China Sea" as a common threat that not only requires investment in U.S. military capabilities "but also strong cooperation with every nation that stands to suffer, from India to Japan and beyond." Unlike in the Middle East and Africa, then, the NSS does identify a rival great power striving for predominance in the Indo-Pacific region. Countering it, though, is seen not just as America's responsibility but also as that of other powerful states in the region.

The strangest section of the 2025 NSS is the one on Europe. While acknowledging that "many Europeans regard Russia as an existential threat," the NSS envisions America's role as "managing European relations with Russia" both to "reestablish conditions of strategic stability" and "to mitigate the risk of conflict between Russia and European states." This is very different from the decades-long U.S. policy of defending democratic Europe, first against an expansionist Soviet Union and more recently against Putin's Russia. Indeed, the NSS's claim that the European Union undermines "political liberty and sovereignty," together with its welcoming of "the growing influence of patriotic European parties" (in other words, anti-EU right-wing nationalist ones), suggests that it is not Russia that the Trump Administration sees as a rival, but the European Union. The 2025 NSS does call for a "strong Europe…to work in concert with us to prevent any adversary from dominating Europe." The NSS, though, seems to regard the European Union as a threat to dominate European nations at least equal to Russia.

In his book, Mearsheimer did not envision the European Union as a potential great power rival to the U.S.; there is not even an entry for it in the book's index. The way the NSS envisions the world, though, comports with how Mearsheimer described America's great power position: predominant in the Western Hemisphere and able to prevent any other great power from becoming predominant in any other region. Mearsheimer, however, is a scholar describing the position he saw the U.S. as having achieved and as seeking to maintain. The 2025 NSS, by contrast, is a policy document laying out how the Trump Administration believes it can best maintain this position. And there is reason to doubt that it has done so realistically.

Keeping non-Hemispheric great powers out of the Western Hemisphere will not be easy when governments there want to cooperate with them. Devoting American resources to predominance in Latin America, where such efforts will be resented and resisted, could not only detract from America's ability to prevent rival great powers from becoming predominant in other regions, but could counterproductively lead Latin American nations that have already begun cooperating with external great powers to deepen that cooperation, which is precisely the outcome the Trump Administration wants to avoid. Moreover, the Trump Administration's efforts to reduce the influence of the European Union run two risks. The first is that such an effort succeeds, but the rise of anti-EU nationalist governments across the continent results in a Europe less able to resist Russian manipulation and incursion.
The second is that these efforts backfire, resulting not only in a Europe united against American interference but in one that unnecessarily emerges as a rival to the U.S. It would be ironic indeed if pursuing the NSS's plan for upholding the position Mearsheimer described, predominance over the Western Hemisphere combined with the ability to prevent any rival from predominating over any other region, ended up undermining America's ability to do either.


Dual-Use AI Technologies in Defense: Strategic Implications and Security Risks

by Mayukh Dey

Introduction

Artificial intelligence has become a critical technology of the 21st century, with applications spanning healthcare, commerce, and scientific research. However, the same algorithms that enable medical diagnostics can guide autonomous weapons, and the same machine learning systems that power recommendation engines can identify military targets. This dual-use nature, whereby technologies developed for civilian purposes can be repurposed for military applications, has positioned AI as a central element in evolving global security dynamics.

The strategic implications are substantial. China views AI as essential for military modernization, with the People's Liberation Army planning to deploy "algorithmic warfare" and "network-centric warfare" capabilities by 2030 (Department of Defense, 2024). Concurrently, military conflicts in Ukraine and Gaza have demonstrated the operational deployment of AI-driven targeting systems. As nations allocate significant resources to military AI development, a critical question emerges: can the security benefits of dual-use AI technologies be realized without generating severe humanitarian consequences?

The Reversal: Commercial Innovation Driving Military Modernization

Historically, military research and development drove technological innovation, with civilian applications emerging as secondary benefits, a phenomenon termed "spin-off." The internet, GPS, and microwave ovens all originated in defense laboratories. This dynamic has reversed: commercially developed technologies now increasingly "spin into" the defense sector, leaving militaries dependent on technologies initially developed for commercial markets.

This reversal carries significant implications for global security. Unlike the Cold War era, when the United States and the Soviet Union controlled nuclear weapons development through state programs, AI innovation occurs primarily in private-sector companies, technology firms, and university research institutions. Organizations like DARPA influence global emerging-technology development, with their projects often establishing benchmarks for research and development efforts worldwide (Defense Advanced Research Projects Agency, 2024). This diffusion of technological capacity complicates traditional arms control frameworks built around state-controlled military production.

The scale of investment is considerable. The U.S. Department of Defense's unclassified AI investments increased from approximately $600 million in 2016 to about $1.8 billion in 2024, with more than 685 active AI projects underway (Defense One, 2023). China's spending may exceed this figure, though exact data remain unavailable due to the opacity of Chinese defense budgeting. Europe is pursuing comparable investments, with the EU committing €1.5 billion to defense-related research and development through initiatives like the European Defence Fund.

Dual-Use Applications in Contemporary Warfare

AI's military applications span the spectrum of warfare, from strategic planning to tactical execution. Current deployments include:

● Intelligence, Surveillance, and Reconnaissance (ISR): AI systems process large volumes of sensor data, satellite imagery, and signals intelligence to identify patterns beyond human analytical capacity. In 2024, "China's commercial and academic AI sectors made progress on large language models (LLMs) and LLM-based reasoning models, which has narrowed the performance gap between China's models and the U.S. models currently leading the field," enabling more sophisticated intelligence analysis (Department of Defense, 2024).

● Autonomous Weapons Systems: Autonomous weapons can identify, track, and engage targets with minimal human oversight. In the Russia-Ukraine war, drones now account for approximately 70-80% of battlefield casualties (Center for Strategic and International Studies, 2025). Ukrainian officials predicted that AI-operated first-person-view drones could achieve hit rates of around 80%, compared to 30-50% for manually piloted systems (Reuters, 2024).

● Predictive Maintenance and Logistics: The U.S. Air Force employs AI in its Condition-Based Maintenance Plus program for F-35 fighters, analyzing sensor data to predict system failures before they occur, reducing downtime and operational costs.

● Command and Control: AI assists military commanders in processing battlefield information and evaluating options at speeds exceeding human capacity. Project Convergence integrates AI, advanced networking, sensors, and automation across all warfare domains (land, air, sea, cyber, and space) to enable synchronized, real-time decision-making.

● Cyber Operations: AI powers both offensive and defensive cyber capabilities, from automated vulnerability discovery to malware detection and sophisticated social engineering campaigns.

Gaza and Ukraine: AI in Contemporary Conflict

Recent conflicts have provided operational demonstrations of AI's military applications and their associated humanitarian costs. Israel's Lavender system reportedly identified up to 37,000 potential Hamas-linked targets, with sources claiming error rates near 10 percent (972 Magazine, 2024). An Israeli intelligence officer stated that "the IDF bombed targets in homes without hesitation, as a first option. It's much easier to bomb a family's home" (972 Magazine, 2024). The system accelerated airstrikes but also contributed to civilian casualties, raising questions about algorithmic accountability.

The system's design involved explicit tradeoffs, prioritizing speed and scale over accuracy. According to sources interviewed by 972 Magazine, the army authorized the killing of up to 15 or 20 civilians for every junior Hamas operative that Lavender marked, while in some cases more than 100 civilians were authorized to be killed to assassinate a single senior commander (972 Magazine, 2024). Foundation models trained on commercial data lack the reasoning capacity humans possess, yet when such systems are applied to military targeting, false positives translate directly into civilian deaths. Data sourced from WhatsApp metadata, Google Photos, and other commercial platforms created targeting profiles based on patterns that may not correspond to combatant status.

Ukraine has implemented different approaches, using AI to coordinate drone swarms and enhance defensive capabilities against a numerically superior adversary. Ukrainian Deputy Defense Minister Kateryna Chernohorenko stated that "there are currently several dozen solutions on the market from Ukrainian manufacturers" for AI-augmented drone systems being delivered to the armed forces (Reuters, 2024). Ukraine produced approximately 2 million drones in 2024, with AI-enabled systems achieving engagement success rates of 70 to 80 percent, compared to 10 to 20 percent for manually controlled drones (Center for Strategic and International Studies, 2025). Both sides in the conflict have developed AI-powered targeting systems, creating operational arms-race dynamics with immediate battlefield consequences.
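The scale of the reported Lavender figures is worth pausing on. The sketch below is illustrative arithmetic only: the ~10 percent error rate and 37,000-target count come from the 972 Magazine report cited above, while the prevalence, sensitivity, and specificity values are hypothetical assumptions chosen to show why a nominally "accurate" classifier can still misidentify large numbers of people when genuine targets are rare in a surveilled population.

```python
# Illustrative arithmetic only. The ~10% error rate and 37,000 flagged
# targets are figures reported by 972 Magazine (2024); the prevalence,
# sensitivity, and specificity below are hypothetical assumptions.

def precision(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Bayes' rule: P(actual combatant | flagged as combatant)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# At the reported scale, a ~10% error rate implies roughly 3,700
# misidentified people among 37,000 flagged.
flagged, error_rate = 37_000, 0.10
print(f"Expected misidentifications: {flagged * error_rate:,.0f}")

# Base-rate effect: assume (hypothetically) that 1% of a surveilled
# population are combatants and the classifier is 90% sensitive and
# 90% specific. Most positive flags are then false.
p = precision(sensitivity=0.90, specificity=0.90, prevalence=0.01)
print(f"Precision at 1% prevalence: {p:.1%}")  # ~8.3%
```

The base-rate effect is the central point: when only a small fraction of a population belongs to the target class, most positive identifications can be false even at high nominal accuracy, which is why the error rates reported for such systems carry severe humanitarian weight.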
Civilian Harm: Technical and Legal Limitations

The integration of AI into lethal military systems raises humanitarian concerns extending beyond technical reliability. AI's inability to uphold the principle of distinction, which requires protecting civilians by distinguishing them from combatants in compliance with international humanitarian law, presents fundamental challenges. Current AI systems lack several capabilities essential for legal warfare:

● Contextual Understanding: AI cannot comprehend the complex social, cultural, and situational factors that determine combatant status. A person carrying a weapon might be a combatant, a civilian defending their home, or a shepherd protecting livestock.

● Proportionality Assessments: International humanitarian law requires that military attacks not cause disproportionate civilian damage. Human Rights Watch has noted that it is doubtful whether robotic systems can make such nuanced assessments (Human Rights Watch, 2024).

● Moral Judgment: Machines lack the capacity for compassion, mercy, or understanding of human dignity, qualities that have historically provided safeguards against wartime atrocities.

● Accountability: With autonomous weapon systems, responsibility is distributed among programmers, manufacturers, and operators, making individual accountability difficult to establish.

As one expert observed, "when AI, machine learning and human reasoning form a tight ecosystem, the capacity for human control is limited. Humans have a tendency to trust whatever computers say, especially when they move too fast for us to follow" (The Conversation, 2024).

The risks extend to specific populations. Autonomous weapons systems trained on historical data consisting predominantly of male combatants could encode algorithmic bias. In the case of Lavender, analysis suggests "one of the key equations was 'male equals militant,'" echoing the Obama administration's approach during drone warfare operations (The Conversation, 2024). Communities of color and Muslim populations face heightened risks given historical patterns of discriminatory force deployment.

Export Controls and Technology Transfer Challenges

Recognizing AI's strategic importance, governments have implemented export control regimes. The U.S. Bureau of Industry and Security now requires licenses for exports of advanced computing chips and AI model weights, imposing security conditions to safeguard the storage of the most advanced models.

These controls face inherent tensions. Overly broad restrictions risk hampering legitimate research and commercial innovation; analysis suggests that if AI technology is controlled too extensively, American universities may struggle to perform AI research, resulting in a less robust U.S. AI ecosystem. Insufficient controls, meanwhile, enable adversaries to acquire cutting-edge capabilities.

The effectiveness of export controls remains uncertain. In 2024, hundreds of thousands of chips, worth millions of dollars, were smuggled into China through shell companies, networks of distributors, and mislabeling techniques (Oxford Analytica, 2025). China's DeepSeek models, which achieved performance approaching U.S. systems, were reportedly trained on chips that circumvented export restrictions.

International Governance: Fragmentation and Competing Frameworks
The international community has struggled to develop coherent governance frameworks for dual-use AI. Rather than a cohesive global regulatory approach, what has emerged is a collection of national policies, multilateral agreements, high-level summits, declarations, frameworks, and voluntary commitments. Multiple international forums have addressed AI governance:

● The UN Secretary-General created an AI Advisory Board and called for a legally binding treaty, to be concluded by 2026, prohibiting lethal autonomous weapons systems that operate without human control.

● The Group of Governmental Experts on Lethal Autonomous Weapons Systems has held discussions under the Convention on Certain Conventional Weapons since 2013, with limited concrete progress.

● NATO released a revised AI strategy in 2024, establishing standards for responsible use and accelerated adoption in military operations.

● The EU's AI Act, adopted in 2023, explicitly excludes military applications and national security from its scope.

This fragmented landscape reflects geopolitical divisions. The perceived centrality of AI to great-power competition has led the U.S. to position itself as leader of ideologically aligned countries in opposition to China, including for security purposes. China promotes its own governance vision through initiatives like the Belt and Road, exporting technology standards alongside infrastructure.

Strategic Stability Implications

AI creates strategic stability challenges. Autonomous weapons enable the substitution of machines for human soldiers in many battlefield roles, reducing the human cost, and thus the political cost, of waging offensive war. This could increase the frequency of conflicts between peer adversaries, each believing it can prevail without significant domestic casualties. In conflicts between non-peer adversaries, reduced casualties further diminish domestic opposition to wars of aggression.

The implications extend beyond conventional warfare. Armed, fully autonomous drone swarms could combine mass harm with a lack of human control, potentially becoming weapons of mass destruction comparable to low-yield nuclear devices. The technical barriers to such systems are declining as components become commercially available.

AI also complicates nuclear stability. Advances in AI-enhanced sensors and data processing could undermine second-strike capabilities by improving detection of mobile missile launchers and submarines. This erosion of assured retaliation could incentivize first strikes during crises. Simultaneously, AI systems managing nuclear command and control create risks of accidents, miscalculations, or unauthorized launches.

Ethical Framework Limitations

The integration of AI into warfare strains traditional ethical frameworks. Just War Theory requires that combatants maintain moral responsibility for their actions, possess the capacity to distinguish combatants from civilians, and apply proportionate force. Automation bias and technological mediation weaken moral agency among operators of AI-enabled targeting systems, diminishing their capacity for ethical decision-making. When operators interact with targets through screens displaying algorithmic recommendations rather than through direct observation, psychological distance increases. This mediation risks transforming killing into a bureaucratic process: the operator becomes less a moral agent making decisions and more a technician approving or rejecting algorithmic suggestions.

Furthermore, industry dynamics, particularly venture capital funding, shape the discourse surrounding military AI, influencing perceptions of responsible AI use in warfare.
When commercial incentives align with military applications, the boundary between responsible innovation and reckless proliferation becomes unclear. Companies developing AI for civilian markets face pressure to expand into defense contracting, often with insufficient ethical deliberation.

Conclusion

Dual-use AI technologies present both opportunities and risks for international security. One trajectory leads toward normalized algorithmic warfare at scale, arms races in autonomous weapons that erode strategic stability, and inadequate international governance resulting in civilian harm. An alternative trajectory involves international cooperation that constrains the most dangerous applications while permitting beneficial uses.

The timeframe for establishing governance frameworks is limited. AI capabilities are advancing rapidly, and widespread proliferation of autonomous weapons will make policy reversal substantially more difficult. The challenge resembles nuclear non-proliferation but unfolds at greater speed, driven by commercial incentives rather than state-controlled programs. Because AI is a dual-use technology, technical advances can deliver both economic and security benefits. This reality means that unilateral restraint by democratic nations would cede advantages to authoritarian competitors; uncontrolled competition, however, risks adverse outcomes for all parties.

Concrete action is required from multiple actors. States must strengthen multilateral agreements through forums like the UN Convention on Certain Conventional Weapons to establish binding restrictions on autonomous weapons that lack meaningful human control. NATO and regional security alliances should harmonize AI ethics standards and create verification mechanisms for military AI deployments. Military institutions must implement mandatory human-in-the-loop requirements for lethal autonomous systems and establish clear chains of accountability for AI-driven targeting decisions (a sketch of what such a requirement might look like in software follows below). Technology companies developing dual-use AI systems bear responsibility for implementing ethical safeguards and conducting thorough threat modeling before commercial release. Industry alliances should establish transparency standards for military AI applications and create independent audit mechanisms. Universities and research institutions must integrate AI ethics and international humanitarian law into technical training programs. Export control regimes require coordination among the United States, the EU, and allied nations to prevent regulatory arbitrage while avoiding overreach that stifles legitimate research. Democratic governments should lead by demonstrating that military AI can be developed within strict ethical and legal constraints, setting standards that distinguish legitimate security applications from destabilizing weapons proliferation.

As Austrian Foreign Minister Alexander Schallenberg observed, this represents the "Oppenheimer moment" of the current generation: dual-use AI, like nuclear weapons, is a technology whose military applications demand collective restraint. The policy choices made in the next few years will have long-term consequences. They will determine whether AI becomes a tool for human advancement or an instrument of algorithmic warfare. The technology exists; the policy framework remains to be established. The actors are identified; the question is whether they possess the political will to act before proliferation becomes irreversible.
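To make the human-in-the-loop requirement concrete, here is a minimal, purely conceptual Python sketch of the pattern the conclusion calls for: the algorithm may only recommend, while a named human operator must explicitly authorize and record a rationale, producing an auditable chain of accountability. Every name and structure in it is a hypothetical illustration, not a description of any fielded system.

```python
# Conceptual sketch only: one way a "human-in-the-loop" requirement with an
# accountability record could be expressed in software. All names and
# structures are hypothetical illustrations, not any fielded system.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    target_id: str
    confidence: float       # model's confidence score, 0.0-1.0
    evidence_summary: str   # human-readable basis for the recommendation

@dataclass
class EngagementDecision:
    recommendation: Recommendation
    approved: bool
    operator_id: str        # named human accountable for the decision
    rationale: str          # operator's recorded reasoning
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def require_human_authorization(rec: Recommendation,
                                operator_id: str,
                                approve: bool,
                                rationale: str) -> EngagementDecision:
    """The system may only *recommend*; any action requires an explicit,
    logged decision attributable to a named operator."""
    if not rationale.strip():
        raise ValueError("A recorded rationale is mandatory for audit.")
    decision = EngagementDecision(rec, approve, operator_id, rationale)
    # In a real system this would be written to an append-only,
    # independently auditable log: the "clear chain of accountability".
    print(f"[AUDIT] {decision.timestamp} operator={operator_id} "
          f"target={rec.target_id} approved={approve} reason={rationale!r}")
    return decision
```

The design point is that accountability is structural rather than optional: the decision record binds a specific person, rationale, and timestamp to every algorithmic recommendation, which is precisely what critics argue current autonomous systems fail to guarantee.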
References

972 Magazine (2024) 'Lavender': The AI machine directing Israel's bombing spree in Gaza. https://www.972mag.com/lavender-ai-israeli-army-gaza/

Center for Strategic and International Studies (2024) Where the Chips Fall: U.S. Export Controls Under the Biden Administration from 2022 to 2024. https://www.csis.org/analysis/where-chips-fall-us-export-controls-under-biden-administration-2022-2024

Center for Strategic and International Studies (2025) Ukraine's Future Vision and Current Capabilities for Waging AI-Enabled Autonomous Warfare. https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare

Defense One (2023) The Pentagon's 2024 Budget Proposal, In Short. https://www.defenseone.com/policy/2023/03/heres-everything-we-know-about-pentagons-2024-budget-proposal/383892/

Department of Defense (2024) Military and Security Developments Involving the People's Republic of China 2024. https://media.defense.gov/2024/Dec/18/2003615520/-1/-1/0/MILITARY-AND-SECURITY-DEVELOPMENTS-INVOLVING-THE-PEOPLES-REPUBLIC-OF-CHINA-2024.PDF

Foreign Policy Research Institute (2024) Breaking the Circuit: US-China Semiconductor Controls. https://www.fpri.org/article/2024/09/breaking-the-circuit-us-china-semiconductor-controls/

Human Rights Watch (2024) A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making. https://www.hrw.org/report/2025/04/28/a-hazard-to-human-rights/autonomous-weapons-systems-and-digital-decision-making

National Defense Magazine (2024) Pentagon Sorting Out AI's Future in Warfare. https://www.nationaldefensemagazine.org/articles/2024/10/22/pentagon-sorting-out-ais-future-in-warfare

Queen Mary University of London (2024) Gaza war: Israel using AI to identify human targets raising fears that innocents are being caught in the net. https://www.qmul.ac.uk/media/news/2024/hss/gaza-war-israel-using-ai-to-identify-human-targets-raising-fears-that-innocents-are-being-caught-in-the-net.html

Reuters (2024) Ukraine rolls out dozens of AI systems to help its drones hit targets. https://euromaidanpress.com/2024/10/31/reuters-ukraine-rolls-out-dozens-of-ai-systems-to-help-its-drones-hit-targets/