
Artificial Intelligence and International Military Conflicts – the Case of the War in Ukraine

by Krzysztof Śliwiński

Abstract

This paper draws on the rapidly emerging literature on the role of artificial intelligence in military conflicts and warfare, as well as its implications for international security. It starts from the assumption that the emerging technology will have a deterministic and potentially transformative influence on military power.

This project intends to ascertain the role of autonomous weapons in modern military conflicts. In doing so, it adds to recent debates among scholars, military leaders and policymakers around the world regarding the potential for AI to become a source of future instability and great-power rivalry.

It is suggested that there is an urgent need to regulate the development, proliferation and use of autonomous weapons and AI-driven weapon systems before it is too late - namely, before AI achieves cognitive skills.

1. Definitions

Encyclopedia Britannica proposes that artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.[1]

Interestingly enough, AI defines itself as the simulation of human intelligence processes by machines, particularly computer systems. These processes include learning, reasoning, problem-solving, perception, and language understanding. AI enables machines to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and natural language processing. AI technologies encompass machine learning, neural networks, deep learning, and other advanced algorithms that allow machines to mimic cognitive functions.[2]

In the context of the military, artificial intelligence refers to the utilization of AI technologies and systems to enhance military capabilities, operations, and decision-making processes. Military applications of AI include autonomous weapons systems, drones, cyber defense mechanisms, predictive analytics for strategic planning, and battlefield surveillance. AI can be used to analyze large volumes of data quickly, identify patterns, and make real-time decisions in support of military objectives. While AI offers significant advantages in terms of efficiency and precision, there are ethical considerations and concerns regarding the potential risks of autonomous AI systems in warfare.[3]

2. AI in the War in Ukraine and Israel vs. Hamas

- Ukraine

The ongoing war in Ukraine is arguably the first "full-scale drone war", with loitering munitions, autonomous ships, undersea drones for mine hunting and uncrewed ground vehicles all being deployed.

AI is heavily used in systems that integrate target and object recognition with geospatial intelligence: the analysis of satellite images, geolocation, and the analysis of open-source data such as social media photos from geopolitically sensitive locations. On top of that, neural networks are used, for example, to combine ground-level photos, drone video footage and satellite imagery.

AI-enhanced facial recognition software has also been used on a substantial scale. AI is playing an important role in electronic warfare and encryption, as well as in cyber warfare, especially in support of defensive capabilities. Finally, AI has also been employed to spread misinformation - the use of deep fakes as part of information warfare.
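One common way of combining ground-level photos, drone footage and satellite imagery, as described above, is so-called late fusion: separate classifiers score each source, and their class probabilities are averaged. The sketch below is purely illustrative - the class labels and probability values are hypothetical, not drawn from any real system.

```python
# Minimal late-fusion sketch: average class probabilities from three
# hypothetical image classifiers (ground photo, drone frame, satellite
# tile), then pick the most likely class. All values are illustrative.

CLASSES = ["vehicle", "artillery", "building", "clutter"]

def fuse(predictions):
    """Average per-class probabilities across sources (late fusion)
    and return the winning label plus the fused distribution."""
    n = len(predictions)
    fused = [sum(p[i] for p in predictions) / n for i in range(len(CLASSES))]
    best = max(range(len(CLASSES)), key=lambda i: fused[i])
    return CLASSES[best], fused

# Hypothetical softmax outputs from three independent models:
ground = [0.60, 0.20, 0.10, 0.10]
drone = [0.50, 0.30, 0.10, 0.10]
satellite = [0.40, 0.35, 0.15, 0.10]

label, fused = fuse([ground, drone, satellite])
print(label)  # the class with the highest averaged probability
```

Averaging is only the simplest fusion rule; real systems may weight sources by reliability or fuse features inside a single network rather than at the output.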
The emergence of this new technology has created new actors - private companies further fueling the so-called privatisation of security: Palantir Technologies, Planet Labs, BlackSky Technology and Maxar Technologies are some examples.

AI-driven systems are changing the field so fundamentally that the combined use of aerial and sea drones in the October 2022 attack on Russia's Black Sea flagship, the Admiral Makarov, was perceived by some analysts as perhaps a new type of warfare.[4]

What makes this conflict unique is the unprecedented willingness of foreign geospatial intelligence companies to assist Ukraine by using AI-enhanced systems to convert satellite imagery into intelligence, surveillance, and reconnaissance advantages. U.S. companies play a leading role in this.

These examples illustrate that the current conflict in Ukraine is a testing ground for AI technology.

- Israel vs. Hamas

The Israeli military says it is using artificial intelligence to select many of its targets in real time. The military claims that the AI system, named "the Gospel", has helped it to rapidly identify enemy combatants and equipment while reducing civilian casualties.[5] Allegedly, multiple sources familiar with the IDF's (Israel Defense Forces) targeting processes confirmed the existence of the Gospel, saying it had been used to produce automated recommendations for attacking targets, such as the private homes of individuals suspected of being Hamas or Islamic Jihad operatives. In recent years, the target division has helped the IDF build a database of what sources said was between 30,000 and 40,000 suspected militants. Systems such as the Gospel, they said, had played a critical role in building lists of individuals authorised to be assassinated.[6]

According to the IDF's own website, the use of these tools does not change the obligatory principles and rules in the Intelligence Directorate's SOP and related instructions. They do not generate orders for attacks. They do not generate new intelligence that could not otherwise be accessed by an intelligence analyst. They do not constitute the sole basis for determining targets eligible for attack - regardless of how accurate they are. On the contrary, these tools improve the quality of the intelligence process outcome. They facilitate the analyst's access to relevant information and help the analyst stay informed of the most up-to-date and relevant intelligence sources, making analyses more precise. They reduce the risk of errors that may occur in intelligence analyses.[7]

3. AI and War

As far as the role of AI-driven technologies and software is concerned, it is probably useful to think of them as the third revolution in warfare - the first being gunpowder and the second nuclear weapons. Additionally, one should bear in mind that AI is closely related to the so-called cyber domain, which in the literature is often referred to as the fifth domain of warfare (the first being land, the second sea, the third air and the fourth space, as in outer space).

While AI and associated technologies hold potential for reducing the harms of war if developed and applied responsibly, there are significant risks of technological escalation, loss of human control and value misalignment that demand proactive international cooperation and oversight to guide the research and use of these systems. Nonetheless, all major powers, including the US and China, are working nonstop to develop military AI systems in the hope of gaining advantages over each other. These technologies include machine learning and deep learning applications with military uses such as drone and vehicle autonomy, cyber and information warfare, and predictive analytics of populations and scenarios. At the same time, AI poses novel challenges and escalatory risks that differ from past arms races and call for new frameworks of governance and norms.

Autonomous weapons threaten to undermine international humanitarian law by removing human accountability from targeting, raising problems of bias and uncertain risks from the loss of meaningful human control. Other related risks include preemptive or predictive AI for mass surveillance, social control and information warfare, which is likely to erode principles of sovereignty, privacy and consent. It does not take a great stretch of the imagination to expect a certain level of 'techno-tyranny' in the future. Job losses to robotic systems are probably a given and as such risk further politico-economic instability; this consequently calls for just transitions and perhaps even a universal basic income.

The opaque 'black box' nature of neural networks hinders verification and accountability, fuelling distrust. Furthermore, there is a potential for accidental or unintentional escalation. Without safeguards and transparency, AI may ultimately serve military-industrial complexes and geopolitical ambitions rather than global security needs.

This fast-emerging technology urgently needs to be regulated. International initiatives for AI governance (norms or regimes) will probably have to be introduced by the UN and its technical bodies. These will have to include outcome accountability through system design, impact assessments, red lines on certain applications and universal access to benefits.

As Heidy Khlaaf, Engineering Director of AI Assurance at Trail of Bits, a technology security firm, warns: "AI algorithms are notoriously flawed with high error rates observed across applications that require precision, accuracy, and safety."[8]

Reportedly, in a simulated military exercise carried out by the US Air Force, an AI drone 'killed' its operator after going rogue: the system worked out that its controller was stopping it from completing its objectives in the test.[9] In parallel, Chinese scientists have created and caged the world's first AI commander in a PLA laboratory.
"The highest-level commander is the sole core decision-making entity for the overall operation, with ultimate decision-making responsibilities and authority."[10]

4. AI and International Security

In terms of national security-level applications of AI, one can clearly identify numerous milieux:

- Military

AI is transforming military operations by enabling autonomous systems, such as drones and robots, to perform tasks that were previously carried out by humans. These systems can be used for surveillance, reconnaissance, target identification, and even combat. AI-powered algorithms can analyze vast amounts of data to provide real-time intelligence, enhance situational awareness, and support decision-making processes on the battlefield.

- Cybersecurity

AI is crucial in combating cyber threats, as it can detect and respond to attacks more effectively than traditional security measures. Machine learning algorithms can analyze network traffic patterns, identify anomalies, and detect potential breaches. AI can also help develop predictive models to anticipate future cyber threats and vulnerabilities, allowing organizations to strengthen their defenses proactively.

- Intelligence and Surveillance

AI enables intelligence agencies to process and analyze massive volumes of data, including social media feeds, satellite imagery, and communication intercepts. Natural Language Processing (NLP) algorithms can extract valuable insights from unstructured data sources, aiding counterterrorism efforts, identifying potential threats, and monitoring geopolitical developments.

- Decision Support Systems

AI can assist policymakers and military leaders in making informed decisions by providing predictive analysis and scenario modeling. Machine learning algorithms can analyze historical data, identify patterns, and generate forecasts regarding potential conflicts, resource allocation, or geopolitical developments. This helps in strategic planning and resource optimization.

- Autonomous Weapons Systems

The development of autonomous weapons systems raises ethical concerns and challenges for international security. AI-powered weapons can operate without direct human control, leading to debates about accountability, proportionality, and adherence to international humanitarian law. International efforts are underway to establish regulations and norms governing the use of such systems.

- Diplomacy and Conflict Resolution

AI can facilitate diplomatic negotiations and conflict resolution by providing data-driven insights and analysis. Natural Language Processing algorithms can assist in analyzing diplomatic texts, identifying common ground, and suggesting potential compromises. AI can also simulate scenarios and predict the outcomes of different negotiation strategies, aiding diplomats in finding mutually beneficial solutions.

- Threat Detection and Prevention

AI can enhance early warning systems for various threats, including terrorism, nuclear proliferation, and pandemics. Machine learning algorithms can analyze patterns in data to identify potential risks and predict emerging threats. This enables governments and international organizations to take proactive measures to prevent or mitigate these risks.

5. Conclusion

In the world of microelectronics, experts often talk about Moore's law: the principle that the number of transistors on chips doubles every two years, resulting in exponentially more capable devices. The law helps explain the rapid rise of many technological innovations, including smartphones and search engines.

Within national security, AI progress has created another kind of Moore's law. Whichever military first masters organizing, incorporating, and institutionalizing the use of data and AI into its operations in the coming years will reap exponential advances, giving it remarkable advantages over its foes.
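The exponential growth that Moore's law describes is easy to make concrete. A minimal sketch, with a purely illustrative starting count:

```python
# Moore's law sketch: a quantity that doubles every two years grows by
# a factor of 2**(years / 2). The initial count below is illustrative.

def moores_law(initial_count, years, doubling_period=2):
    """Projected count after `years`, doubling every `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

# Over a single decade the count grows 2**5 = 32-fold:
print(moores_law(1_000_000, 10))  # 32000000.0
```

The same arithmetic underlies the paper's "another kind of Moore's law" claim: any capability that compounds at a fixed doubling period quickly dwarfs a linearly improving rival.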
The first adopter of AI at scale is likely to have a faster decision cycle and better information on which to base decisions. Its networks are likely to be more resilient when under attack, preserving its ability to maintain situational awareness, defend its forces, engage targets effectively, and protect the integrity of its command, control, and communications. It will also be able to control swarms of unmanned systems in the air, on the water, and under the sea to confuse and overwhelm an adversary.[11]

References

*This paper was presented at the International Studies Association 65th Annual Convention, San Francisco, California, April 3-6, 2024.

1 Copeland, B. (2024, March 15). Artificial intelligence. Encyclopedia Britannica. https://www.britannica.com/technology/artificial-intelligence

2 How do you define artificial intelligence? ChatGPT, GPT-4 Turbo, OpenAI, 2024, October 25. https://genai.hkbu.edu.hk/

3 How do you define artificial intelligence in the context of the military? ChatGPT, GPT-4 Turbo, OpenAI, 2024, October 25. https://genai.hkbu.edu.hk/

4 Fontes, R., & Kamminga, J. (2023, March 24). Ukraine: A living lab for AI warfare. National Defense, NDIA's Business Technology Magazine. https://www.nationaldefensemagazine.org/articles/2023/3/24/ukraine-a-living-lab-for-ai-warfare

5 Brumfiel, G. (2023, December 14). Israel is using an AI system to find targets in Gaza. Experts say it's just the start. Wisconsin Public Radio. https://www.wpr.org/news/israel-using-ai-system-find-targets-gaza-experts-say-its-just-start

6 'The Gospel': How Israel uses AI to select bombing targets in Gaza. (2023, December 1). The Guardian. https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

7 The IDF's use of data technologies in intelligence processing. (2024, June 18). IDF Press Releases: Israel at War. https://www.idf.il/en/mini-sites/idf-press-releases-israel-at-war/june-24-pr/the-idf-s-use-of-data-technologies-in-intelligence-processing/

8 Brumfiel, G. (2023, December 14). Israel is using an AI system to find targets in Gaza. Experts say it's just the start. Wisconsin Public Radio. https://www.wpr.org/news/israel-using-ai-system-find-targets-gaza-experts-say-its-just-start

9 Bowman, V. (2023, June 2). AI drone 'killed operator' after going rogue on simulation. The Telegraph. https://www.telegraph.co.uk/world-news/2023/06/02/us-air-force-ai-military-drone-goes-rogue-simulation/

10 Chen, S. (2024, June 16). Chinese scientists create and cage world's first AI commander in a PLA laboratory. South China Morning Post. https://www.scmp.com/news/china/science/article/3266444/chinese-scientists-create-and-cage-worlds-first-ai-commander-pla-laboratory?module=top_story&pgtype=homepage

11 Flournoy, M. A. (2023). AI is already at war. Foreign Affairs, 102(6), 56-69. https://search.ebscohost.com/login.aspx?direct=true&AuthType=shib&db=bth&AN=173135132&site=ehost-live