
Owen J. Daniels, Center for Security and Emerging Technology. No. 1, April 2024

Stuck between real and ideal: the OODA loop and military AI

Artificial intelligence (AI) is often heralded as a technology that will potentially transform warfare. Its ability to parse large amounts of data quickly and produce outputs that humans can use to help solve problems holds military appeal for warfighters, decision-makers, and policymakers alike.

Topic: AI and autonomous systems
Reading time: 11 min

AI has proven capable of outperforming humans at certain tasks, for example in complex strategy games, and AI-enabled autonomous systems seem to hold the potential to bring additional mass, lethality, and speed to operations.1 This last factor in particular, speed, has received significant attention from military thinkers. Some have used the idea of the OODA loop, an acronym for the Observe-Orient-Decide-Act mental model used by fighter pilots, to think about how humans might harness AI to gain advantages in combat by taking decisive action more quickly than their adversaries. In this interpretation, warfighters can outpace or “hack” the OODA loops of the enemy with technology.2

The OODA loop is useful as a high-level heuristic model. But when it comes to thinking about AI and the future of warfare, theorizing about technology’s impact using the OODA framework alone may be too simple and could limit creative thinking about the different ways AI may best be employed.3 As technologists, warfighters, and policymakers consider how best to capitalize on AI’s promise, they need the ability to separate the real and the ideal when it comes to AI applications.

Assumptions

The OODA loop, a mental model of tactical decision-making developed by US Air Force Colonel John Boyd during the Cold War, has won admirers around the globe in a variety of fields. The model was intended to help dogfighting pilots gain marginal advantages in air-to-air encounters by processing and reacting to developments more rapidly than their opponents. By drawing on rigorous training that incorporated lessons from education, study, and military culture, a pilot who could cycle through the OODA loops of an encounter more quickly than an opponent would be more likely to win.4

In a hypothetical, ideal OODA loop scenario using AI, a system would first help a human decision-maker observe a scenario and orient. Autonomous vehicles and other AI-enabled sensors might collect information on an enemy’s movements on the battlefield and feed it into a recommender system. This recommender system would draw on the aggregated signals captured from different intelligence, surveillance, and reconnaissance (ISR) platforms, reconciling redundancies, analyzing gaps, and presenting a clear operating picture to a human commander. AI systems would theoretically be able to do this more quickly and efficiently than existing processes and systems, allowing a commander to decide how to proceed. The commander’s course of action, speedily arrived at with the help of AI, could provide him or her with a tactical or operational advantage over the adversary. AI may even be involved in the action, for instance guiding a weapons system to its target.
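To make the shape of this idealized loop concrete, the short Python sketch below walks through the four steps. It is purely illustrative: the data structure (SensorReport), the fusion rule (keep the highest-confidence report per track), and the recommendation logic are hypothetical simplifications, not a description of any fielded system.

    from dataclasses import dataclass

    # Purely illustrative sketch of the idealized observe-orient-decide-act
    # pipeline described above. All names, rules, and values are hypothetical.

    @dataclass
    class SensorReport:
        platform: str       # e.g., "uav-3" or "radar-west" (hypothetical)
        track_id: str       # identifier for the observed entity
        location: tuple     # (latitude, longitude)
        confidence: float   # 0.0-1.0 self-reported confidence

    def observe(reports):
        """Observe: gather raw reports from multiple ISR platforms."""
        return list(reports)

    def orient(reports):
        """Orient: fuse redundant reports into a single operating picture,
        keeping the highest-confidence report for each track."""
        picture = {}
        for report in reports:
            best = picture.get(report.track_id)
            if best is None or report.confidence > best.confidence:
                picture[report.track_id] = report
        return picture

    def decide(picture, commander_choice=None):
        """Decide: a recommender proposes a course of action, but in the
        idealized loop the human commander retains the final say."""
        recommendation = "engage" if picture else "hold"
        return commander_choice or recommendation

    def act(decision):
        """Act: hand the decision to whatever executes it."""
        print(f"Executing: {decision}")

    if __name__ == "__main__":
        reports = [
            SensorReport("uav-3", "track-7", (59.91, 10.75), 0.6),
            SensorReport("radar-west", "track-7", (59.91, 10.74), 0.9),
        ]
        act(decide(orient(observe(reports))))  # prints "Executing: engage"

Even in this toy version, the decide step is where the assumptions discussed below begin to bite: the human choice is only as good as the fused picture and the recommendation placed in front of the commander.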

This highly stylized hypothetical is understandably appealing, but it rests on several important assumptions. First, it assumes technology will perform predictably, reliably, and consistently. It assumes that commanders and warfighters will trust the outputs of AI systems and be able to arrive at decisions given some baseline knowledge of a system’s workings and inputs, without simply defaulting to the recommended course of action offered by the system. In the future case of AI-enabled weapons systems, it assumes commanders will be sufficiently confident in the functioning of these systems to take responsibility for the outcomes of decisions to use force, commensurate with appropriate levels of human judgment or meaningful human control.

Upon closer examination, this hypothetical example may be more useful as a guiding vision for the future applicability of AI capabilities than as a realistic portrayal of how warfighters can expect to capitalize on AI’s potential. It outlines an ideal future for AI from a military perspective, one that is not burdened by the real limits of present technology or considerations for the fog and friction of war. Walking through key questions related to the assumptions mentioned above makes clear that, without careful consideration, this vision could become a mirage.

The first question is related to context. In which setting is an AI system speeding a human’s decision-making process? In a combat scenario such as Boyd’s dogfights, where split seconds may mean life or death? Or at the operational or strategic levels, where commanders may have more time to make decisions while deliberating over vast troves of information and data?

The use context informs the second question: what type of AI is being used? AI is a notoriously difficult term to define, since it applies to a range of systems, from computer vision and navigation to big data analytics and decision support. To draw on two relatively familiar examples, is the system in this scenario an AI-enabled autonomous co-pilot like Loyal Wingman? Or a recommender system for battlespace awareness and management, similar to how the Joint All-Domain Command and Control (JADC2) system has been described?5 Systems-of-systems like these will probably rely on more than one AI capability, such as computer vision algorithms, sensor integration, and big data analytics, to produce outputs for human decision-makers. Multiple AI nodes like these will make it difficult to identify precisely how AI is influencing a human’s decision-making.

Recent AI advances in areas like large language models (LLMs) such as ChatGPT have been impressive, and they offer a window into how this type of generative AI model might one day help humans parse information quickly and effectively. But we are not yet at the stage where speed should be our primary concern with these and other models. Output quality, reliability, and trust in new and evolving AI systems are not yet well enough established for us to be confident in their ability to consistently help humans make correct decisions, let alone make them rapidly.

For example, LLMs have several issues that would concern decision-makers in high-pressure scenarios. First, AI systems need comprehensive data resources to perform reliably, resources which militaries may not yet consistently have at their disposal. Second, systems like LLMs have been shown to “hallucinate,” that is, to provide answers to user queries that are not factual or found in training data. Third, the inner workings of these and other AI systems can be opaque, offering users imperfect insight into how a system arrived at a conclusion. The AI field is ever-evolving, and technologists are looking for ways to address these challenges, such as interpretable models and small data learning systems. Nonetheless, without major strides against these problems and progress in how militaries test, evaluate, verify, and validate AI models, the reliability of the technology for certain critical missions and scenarios, particularly life-or-death ones, will likely remain limited.

Calibrating trust in AI performance

Another key consideration is the relationship between humans and machines, which is not new in a military context. Despite its technological novelty, AI remains a tool, and part of its effectiveness will lie in humans’ ability to understand, trust, and harness it. The human side of the human-machine equation will be equally important, and will require AI literacy on the part of both decision-makers and operators. Humans will need to calibrate how much they trust the performance of AI systems so that they can use them to make better decisions or to offload responsibility for certain tasks. Ideally, they will be able to do so without second-guessing the system’s performance. At the same time, though, the complexity of AI systems, the opacity of their reasoning processes, and time pressure in certain decision-making scenarios could lead humans to default to trusting the “orientation” provided by AI systems without question. Such automation bias could lead to mistakes, misinterpretation, or even unintentional escalation. It is not uncommon for warfighters to rely on machine outputs without much second thought, particularly in scenarios where time is of the essence. However, AI models require different methods of testing and verification than other systems, given their unique characteristics and risks from unpredicted or emergent behaviors, and many are not at the point where their outputs should be blindly trusted for critical functions.
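One narrow slice of that calibration problem can be made concrete: checking whether a system’s stated confidence matches how often it is actually right. The minimal sketch below bins hypothetical predictions by confidence and compares average confidence with observed accuracy. All data and numbers here are invented for illustration, and real military test, evaluation, verification, and validation regimes would be far more involved.

    # Minimal, illustrative calibration check: does stated confidence track
    # observed accuracy? All inputs below are made-up examples.

    def calibration_by_bin(predictions, n_bins=5):
        """predictions: iterable of (confidence, was_correct) pairs.
        Returns (bin_index, average_confidence, observed_accuracy) for each
        non-empty confidence bin. Large gaps between the last two numbers
        suggest the stated confidence should not be taken at face value."""
        bins = [[] for _ in range(n_bins)]
        for confidence, was_correct in predictions:
            index = min(int(confidence * n_bins), n_bins - 1)
            bins[index].append((confidence, was_correct))
        report = []
        for index, bucket in enumerate(bins):
            if not bucket:
                continue
            avg_conf = sum(c for c, _ in bucket) / len(bucket)
            accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
            report.append((index, round(avg_conf, 2), round(accuracy, 2)))
        return report

    # Hypothetical spot checks: confident answers that are often wrong.
    sample = [(0.92, False), (0.95, False), (0.90, True), (0.60, True), (0.55, True)]
    print(calibration_by_bin(sample))
    # [(2, 0.55, 1.0), (3, 0.6, 1.0), (4, 0.92, 0.33)]

A check like this says nothing about whether an individual recommendation is operationally sound; it only flags when confidence and performance have drifted apart, which is one input into how much trust an operator should extend.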

As technology developers and militaries continue to work on problems like these facing human-machine teams, military thinkers would do better to focus on how AI can help improve decision and action quality, including acting at the moment least advantageous to the enemy, rather than on speed alone.6 The emphasis on speed is understandable: militaries need to seek out competitive advantages against potential adversaries. However, prioritizing only operational speed may overlook the shortcomings of present technological capabilities and limit new thinking about AI’s usefulness for different kinds of operational problem-solving.

Considering Competitors

Conversations about speed and outpacing the enemy’s decision-making cycle are important. A military, or even an individual warfighter, that can orient to strategic or tactical environments more quickly may gain advantages in certain combat scenarios. Some have argued that competitors to the United States and NATO Allies and partners, such as Russia or China, will undoubtedly use AI to speed their operations, and that authoritarian competitors might feel less constrained in using military AI applications that could be considered unacceptable in democratic states.

This line of reasoning requires serious consideration. At the same time, Russia’s and China’s competitors should be careful to avoid mirror-imaging around these two countries’ potential uses of AI. For example, it is unclear whether Russia and China might devolve AI-enabled decision-making to lower levels of authority in the same manner that American or European militaries might. Again, human factors, such as military culture, hierarchy, and attitudes toward risk tolerance or aversion, will play a role in how different militaries adopt and deploy AI. To clarify competitors’ intentions around AI and to avoid technological surprise, NATO militaries should closely monitor technological developments in these countries through both classified and open-source intelligence collection. They should also carefully weigh reports about the deployment of AI-enabled technologies from Ukraine and other wars, given that commercial actors in the conflict may have incentives to exaggerate the extent to which certain capabilities are AI-enabled.7

Conclusion

The OODA loop is conceptually easy to grasp and pedagogically useful, which likely explains the popular traction it has gained outside military circles over the years. However, the framework has important limitations to acknowledge even beyond the complex world of AI development and deployment. For example, as one ascends the levels of decision-making to which the loop might be applied, from the tactical up to the strategic, the systems of loops become so abstract that they ultimately offer less utility. Put differently, it may be possible to conceptually attack large, multifaceted problems using an OODA approach, but given how many granular details the simple model elides, it is perhaps best to think of the OODA loop framework as a high-level roadmap, particularly when it comes to complex emerging technologies.

It is possible that AI will transform warfare across a wide range of applications. Exploring these applications in specific detail, rather than fixating on the speed advantages AI will provide, will go further toward helping armed forces around the globe uncover and harness this technology’s true potential. It is important to remember that truly revolutionary military innovations are rarely generated by technology alone, but rather by the interplay of technical and intellectual advances by strategists, analysts, and technologists working in conjunction. Hard thought must be given to the role of AI in operations beyond its impact on speed, including practical, cognitive, and ethical considerations. Artificial intelligence is unlikely to solve all of the problems that plague modern warfare, and the fog and friction of war will continue to evolve alongside technology. Policymakers must be clear-eyed and continually update their beliefs about the difference between the technology’s real and ideal states.

About the author:
Owen J. Daniels is the Andrew W. Marshall Fellow at Georgetown’s Center for Security and Emerging Technology (CSET). Prior to joining CSET, he worked in the Joint Advanced Warfighting Division at the Institute for Defense Analyses (IDA), where he researched the ethical implications of artificial intelligence and autonomy, autonomous weapons norms, and joint operational concepts, among other issues. 

  1. “Google AI Defeats Human Go Champion,” BBC News, May 25, 2017, https://www.bbc.com/news/technology-40042581.
  2. E.g., see Frank Strickland, “Back to basics: How this mindset shapes AI decision-making,” Defense Systems, September 30, 2019, https://defensesystems.com/articles/2019/09/18/deloitte-ai-ooda-loop-oped.aspx; and Wendy R. Anderson, Amir Husain, and Marla Rosner, “The OODA Loop: Why Timing is Everything,” Cognitive Times (2017): 28-29.
  3. Owen Daniels, “Speeding Up the OODA Loop with AI: A Helpful or Limiting Framework?,” NATO Joint Air Power Competence Center, 2021, https://www.japcc.org/essays/speeding-up-the-ooda-loop-with-ai/.
  4. George M. Gross, “Nonlinearity and the Arc of Warfighting,” Marine Corps Gazette (2019): 44-47, https://mca-marines.org/wp-content/uploads/Nonlinearity-and-the-Arc-of-Warfighting.pdf.
  5. Theresa Hitchens, “Exclusive: J6 Says JADC2 Is A Strategy; Service Posture Reviews Coming,” Breaking Defense, January 4, 2021, https://breakingdefense.com/2021/01/exclusive-j6-says-jadc2-is-a-strategy-service-posture-reviews-coming/.
  6. Alastair Luft, “The OODA Loop and the Half-Beat,” The Strategy Bridge, March 17, 2020, https://thestrategybridge.org/the-bridge/2020/3/17/the-ooda-loop-and-the-half-beat.
  7. Gregory C. Allen, “Russia Probably Has Not Used AI-Enabled Weapons in Ukraine, but That Could Change,” Center for Strategic and International Studies, May 26, 2022, https://www.csis.org/analysis/russia-probably-has-not-used-ai-enabled-weapons-ukraine-couldchange; and Andrew Imbrie, Owen Daniels, and Helen Toner, “Decoding Intentions” (Center for Security and Emerging Technology, October 2023), https://cset.georgetown.edu/publication/decoding-intentions/.
