by MSG Woody Woodward, US Army
The battlefield has grown vertically, horizontally, and virtually, expanding into multiple complex domains and adding an enormous amount of data that warfighters must capture, compile, correlate, synthesize, and analyze to make split-second decisions that can mean the difference between life and death (Grooms, 2019). What if a computer equipped with sophisticated Artificial Intelligence could make that decision-making process exponentially faster, more efficient, and more effective? Integrating Artificial Intelligence and Automated Battle Management Aids (ABMA) into the military’s systems will enhance its ability to conduct everything from daily business to actions on the battlefield by reducing the time it takes to plan, process, automate, and augment information that would otherwise require a human to process, analyze, and exploit.
What is Artificial Intelligence (AI), though? In its earliest form, AI was a mechanically driven algorithm that completed tasks with little to no human interaction; it has since evolved into advanced machine learning, powered by computer technology designed to mimic human intelligence (Ashby et al., 2020, pp. 8-9). Many people do not realize AI has been around for over a century: in 1914, the scientist and inventor Leonardo Torres y Quevedo created a chess machine that could react to any move made by an opponent (Byrnes, 2016). Its algorithm was based on a reactive and counteractive mechanical formula that formed the structural base for the design (Byrnes, 2016). AI algorithms are now self-operating on computers rather than mechanical structures, and they can work out solutions to problems using available information and flexible rules, unlike the earlier primitive designs (Haaxma-Jurek, 2021). The two categories of AI systems used today are supervised learning and unsupervised learning (Burt et al., 2022).
Supervised Learning
Supervised learning involves providing the AI with labeled data sets where the correct output is known (Burt et al., 2022). By manually feeding the exact known specifications of the objects to be identified into the AI system, it essentially learns how to identify adversaries’ assets through specified data points (Datategy, 2023). For example, one could give an image of a Russian MiG-29 fighter jet to an AI program and ask it what it thinks the image is. The MiG-29’s data points, including every specification and attributable identification marking, would first need to be loaded into the AI; this would allow the aircraft to be associated with a specific country that has the MiG-29 in its arsenal (Datategy, 2023). Because it holds exact specifications, the AI system can also conduct a triage assessment of battle damage with a high degree of confidence. Using supervised learning, the AI can digest and analyze feeds from multiple military intelligence, surveillance, and reconnaissance platforms, allowing for the most accurate identification and assessment of the adversary’s military assets and providing decision-makers with near real-time critical intelligence to make informed decisions.
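To make the idea concrete, the sketch below shows what supervised learning of this kind can look like in code. It is a minimal illustration, not a description of any fielded system: the folder path aircraft_data/train, the class labels, and the use of PyTorch with a small pretrained ResNet backbone are all assumptions made purely for this example.

```python
# Minimal sketch of supervised learning for aircraft recognition.
# Assumptions (not from the article): a hypothetical folder "aircraft_data/train"
# organized as aircraft_data/train/<class_name>/<image>.jpg, and PyTorch/torchvision
# with a small pretrained backbone standing in for the classifier.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

# Every training image carries its known label (e.g., "mig29", "f16").
# Supervised learning means the correct output is supplied for each example.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("aircraft_data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Reuse a pretrained backbone and replace its final layer with one output
# per aircraft class found in the labeled data set.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in loader:          # labels are the "known answers"
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()                    # learn from the labeled examples
        optimizer.step()

# After training, a new image can be classified and reported with a
# confidence score (softmax probability) for each known aircraft type.
```

The essential point is the labeled data: the model is only as good as the curated specifications and imagery it is trained on, which is why supervised systems lend themselves to high-confidence identification of known asset types.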
Unsupervised Learning
Unsupervised learning describes an AI that starts with minimal or unlabeled data sets and either creates synthetic data points or retrieves data from online resources it selects in order to produce an answer or output (Burt et al., 2022). The difficulty, or reservation, with unsupervised learning is the validity of the data sources themselves: if the AI draws on a source that can be easily altered, such as Wikipedia, it could mistakenly provide an inaccurate response. An example is Google’s Language Model for Dialogue Applications (LaMDA), which uses unsupervised learning to develop conversations with the people it engages (Collins, 2021). The more LaMDA converses, the faster it learns subtle language nuances, and it can use open-source language resources to continue a free-flowing conversation (Collins, 2021). Both supervised and unsupervised AIs have their respective places in the Department of Defense (DoD).
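By contrast, a minimal unsupervised-learning sketch is shown below. The sensor readings are synthetic and the use of scikit-learn’s k-means clustering is an assumption made only for illustration; the point is that the algorithm receives no labels and must find structure on its own, leaving interpretation to a human analyst.

```python
# Minimal sketch of unsupervised learning: grouping unlabeled observations
# with no "correct answers" supplied. The feature values below are synthetic
# and purely illustrative; scikit-learn's KMeans stands in for whatever
# clustering approach an operational system might actually use.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled sensor observations (e.g., speed in knots, altitude in feet).
# No one tells the algorithm which rows are fighters, airliners, or drones.
observations = np.array([
    [480.0, 35000.0],   # fast, high
    [450.0, 36000.0],
    [110.0,  1200.0],   # slow, low
    [ 95.0,   900.0],
    [600.0, 40000.0],
    [120.0,  1500.0],
])

# The algorithm discovers structure on its own by grouping similar rows.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(observations)
print(kmeans.labels_)           # cluster assignment for each observation
print(kmeans.cluster_centers_)  # the "profiles" the model inferred

# An analyst still has to interpret what each cluster means: the model
# provides groupings, not validated identifications.
```

Because nothing constrains what the model learns from, the quality and provenance of the input data matter even more here than in the supervised case, which is exactly the reservation noted above.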
AI-Integrated Systems in the DoD
Although AI has only recently become a hot topic, the military has been using AI-enabled systems in training for almost two decades. For example, in the early years of the Iraq and Afghanistan wars, the Engagement Skills Trainer (EST) 2000 was fielded. It supplied a realistic virtual weapons training platform that allowed Soldiers to practice and hone their marksmanship skills without ever setting foot on a live range (Mittal & Schloo, 2020). The EST 2000 was developed using supervised machine learning in an augmented-reality environment, allowing Soldiers to experience live shoot/don’t-shoot scenarios in which the system reads a Soldier’s reactions and alters the scenario response based on the Soldier’s actions (Mittal & Schloo, 2020). The EST system has evolved over the years; it can now accommodate simulated vehicle convoys and open-terrain scenarios using multiple weapons systems, with command-and-control systems integrated to allow squad- and platoon-level training (Department of the Army, 2018). AI-enabled training simulators have evolved even further, into large-scale combat operations exercises.
In late 2021, Supreme Headquarters Allied Powers Europe (SHAPE), the military arm of the North Atlantic Treaty Organization (NATO), led a virtual air operations exercise called Spartan Warrior (Allied Air Command Public Affairs Office [AAC PAO], 2021). The exercise allowed the entire NATO alliance and allied partners to take part in a mixed virtual and live simulation using AI-integrated systems to conduct large-scale air operations in the European theater against a fictitious enemy (AAC PAO, 2021). Using an AI-integrated virtual and live training system, the warfare center allowed participants from around the world to connect to the training network and share a wide range of intelligence, targets, manned and unmanned intelligence, surveillance, and reconnaissance platforms, and attack air assets based out of seven different countries, from the United States (U.S.) to Germany to the Netherlands (AAC PAO, 2021). The exercise involved air assets from multiple locations across Europe as well as Joint Terminal Attack Controllers (JTACs) based at the warfare center in Baumholder, Germany (AAC PAO, 2021). Using AI-integrated systems, the JTACs and allied partners were able to hone their tradecraft against both simulated and live training targets (AAC PAO, 2021). This is an excellent example of why the DoD should put more effort toward AI-enabled ABMA systems.
ABMA System
ABMA systems are not designed to do the job of a specific warfighting function but to augment and supply correlated and synthesized information that helps commanders and decision-makers make informed decisions (Grooms, 2019). Command and control is arguably one of the most challenging jobs in the military, especially for tactical commanders, who are inundated with enormous amounts of information, data, intelligence, and decisions; some of those decisions must be made in a very short amount of time with either very little or overwhelming amounts of information (Grooms, 2019). An example of ABMA support in action was the XVIII Airborne Corps’ training exercise in the first quarter of 2022. The corps used AI-enhanced target detection training systems that could detect and analyze a threat, develop targeting guidance, coordinate with the proper attack asset, and neutralize the target (Vergun, 2022). The exercise culminated with an F-35 fighter jet dropping live ordnance on an AI-derived target and putting the bomb within three feet of the AI-provided grid coordinate (Vergun, 2022). This is an excellent example of AI-enabled ABMA systems supporting the command and control (C2) and fires warfighting functions, but neither would be possible without intelligence.
The intelligence warfighting function is another area where AI-enabled ABMA systems can assist. The number of intelligence, surveillance, and reconnaissance sensors, data feeds, data repositories, and streams of information funneling into the intelligence section is overwhelming for a human analyst, and the sheer amount of analysis required to correlate and synthesize that data so the commander can make an informed decision is astronomical on today’s advanced battlefield. An AI-enabled ABMA intelligence system would allow enormous amounts of data to be collected, processed, exploited, and disseminated in a synthesized, easily understandable manner, allowing decision-makers to digest it and make quick, relevant, and decisive choices. Another area within the DoD, and more specifically the Department of the Army, that could benefit from AI is logistics (Office of Inspector General [OIG], 2014).
According to a 2014 DoD Inspector General report, in 2013 the Department of the Army far surpassed its forecasted parts and supplies budget by almost $84 million (OIG, 2014). The report goes on to say that the primary issues were improper updates to usage rates for spare parts, ordering the wrong parts for specific projects, inaccurate coding of spare parts for projects, and failure to provide adequate accountability, both physically and through proper documentation (OIG, 2014). An AI-enabled logistics system may be the answer; a recent article published by the Multidisciplinary Digital Publishing Institute (MDPI) addresses this very issue with a case study on South Korea’s logistical forecasting of consumable parts for the K-X tank and its munitions (Kim et al., 2023). The study found that by layering multiple types of specific AI algorithms, the authors were able to forecast an entire year’s worth of consumable parts for both the tank and the munitions (Kim et al., 2023). The article goes on to explain how, by running the forecasting AI, they were able to determine the cost savings that followed from accurate allocations for spare parts and parts replacements (Kim et al., 2023).
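As a rough illustration of the forecasting idea, the sketch below layers two off-the-shelf regression models over synthetic monthly demand data. The demand numbers, the lag-feature setup, and the choice of scikit-learn models are assumptions made for the example and are not drawn from the Kim et al. (2023) study itself.

```python
# Minimal sketch of AI-based spare-parts demand forecasting, loosely inspired
# by the idea of layering multiple algorithms. The monthly demand numbers are
# synthetic, and the two scikit-learn models are stand-ins for whatever
# algorithms an operational forecasting system would actually use.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

# Synthetic monthly consumption history for one consumable part (36 months).
rng = np.random.default_rng(0)
demand = 50 + 10 * np.sin(np.arange(36) * 2 * np.pi / 12) + rng.normal(0, 3, 36)

# Build lag features: use the previous 3 months to predict the next month.
X = np.column_stack([demand[i:i + 33] for i in range(3)])
y = demand[3:]

models = [
    RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y),
    GradientBoostingRegressor(random_state=0).fit(X, y),
]

# Forecast the next 12 months, feeding each prediction back in as a lag and
# averaging ("layering") the two models at every step.
history = list(demand)
forecast = []
for _ in range(12):
    lags = np.array(history[-3:]).reshape(1, -1)
    step = np.mean([m.predict(lags)[0] for m in models])
    forecast.append(step)
    history.append(step)

print([round(f, 1) for f in forecast])  # projected annual parts demand
```

A realistic system would add many more inputs (operating tempo, fleet age, deployment schedules), but even this toy version shows how a year's worth of consumable-parts demand can be projected and turned into budget and ordering decisions.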
There is no arguing that the military would come to a screeching halt without logistical supplies and the constant resupply of everything from fuel, tires, and pens to O-rings for an M1 Abrams tank. If the Department of the Army were able to develop and implement an AI-enabled logistics program capable of properly forecasting for pacing items like the Stryker, tanks, and Bradleys, down to individual weapon systems, it could save time, money, and quite possibly lives. As the three examples above show, the development and implementation of AI-enabled ABMAs would decrease the time required for tactical decisions and increase the security, sustainability, and lethality of the U.S. military (Grooms, 2019). One of the challenges, however, is the ethical quandary that AI and AI-enabled ABMA systems present to decision-makers.
Germany, May 12, 2023. M1A1 tanks to be used for the training of Ukrainian Armed Forces personnel. 7th Army Training Command will lead the events in Germany on behalf of U.S. Army Europe and Africa. (U.S. Army photo by Spc. Adrian Greenwood)
AI Ethical Considerations
With AI and ABMA come ethical concerns from the academic community, the military community, the Intelligence Community (IC), and the leaders of the U.S. Many of the concerns have to do with AI becoming autonomous, meaning that it is learning on its own and developing its own opinions or way forward. How does one keep a self-learning AI within the ethical norms of society or the bounds of the law? The IC has developed what it calls an “Artificial Intelligence Ethics Framework for the Intelligence Community” (Office of the Director of National Intelligence [ODNI], 2020a). The framework is structured to define goals, identify risks associated with using AI, and identify the legal requirements for using it (ODNI, 2020a).
Another consideration is accountability: at what point is a line drawn that requires further action or a decision by a human? There must be proper accountability for actions taken based on AI-generated analysis (ODNI, 2020a). Every analyst in the IC must take bias training to learn to recognize their own biases and keep them from affecting their analysis (ODNI, 2020b), and the IC must make a conscious effort to stay objective and refrain from allowing personal bias to skew its analysis (ODNI, 2020a). Other principles include assessing the AI to ensure it is working correctly and tracking versions as they evolve (ODNI, 2020a). The last few principles focus on purpose, left and right limits, the outcomes of finished AI products, and transparency throughout the entire AI system (ODNI, 2020c). AI systems must have checks and balances to ensure they are still functioning as intended and within defined parameters, and lastly there is the requirement to be good stewards and maintain accountability for the “Training, Data, Algorithms, Models, and outputs of the model’s documentation” (ODNI, 2020c).
Fictitious Unethical Use of AI-Enabled Systems Scenario
Imagine a nation that ignores ethical boundaries on AI. A deployed unit is conducting operations along the Russian border, tasked with observing incursions into allied protected areas. The military allows Soldiers to use personal cell phones and to have commercial internet in their living quarters so they can stay in touch with family and friends. In their off time, Soldiers do what Soldiers do: surf the internet, contact family, chat with people on social media, and conduct regular everyday business.
One of the Soldiers, while working his post, has accidentally brought his phone with him, which is typically not a big deal. However, mid-shift, the Soldier receives a video call from a woman he has been messaging online for the last couple of weeks (Ibrahim, 2022). He answers, is awestruck at how beautiful the woman is, and starts a deep conversation with her. He quickly loses focus on his mission, leaving the lines of passage unobserved and allowing three unmarked vehicles to cross the border. The three sport utility vehicles are carrying a team of Russian Spetsnaz intending to conduct disruption operations in the protected allied area. Although this scenario is fictional, it is plausible: a sophisticated unsupervised-learning AI can hold a conversation by accessing open-source data to answer or reply to whatever the Soldier asks or talks about. It is also an example of multi-domain operations and the unethical use of AI (Department of the Army, 2019; Ibrahim, 2022).
How did this happen? The commercial internet and personal cell phones allowed Russian intelligence to access, target, and then infiltrate the security forces personnel in the area. AI-enabled chatbots probed, distracted, and gained valuable intelligence from the Soldiers, allowing nefarious actors behind friendly lines. Could this have been prevented? Could AI-enhanced security protocols have stopped the enemy AI systems and the Spetsnaz from gaining access to the area of operations?
Despite the military’s attempts to improve morale among the troops, these locally supplied commercial services do not protect against attacks like this. The military must provide Soldiers with secure means to communicate with family and friends back home, and the Department of Defense must implement AI-enhanced security protocols to detect, target, and deny these nefarious actors, lowering the risk to the mission and to the troops.
Flag of Russia’s Foreign Intelligence Service.
Conclusion
Implementing AI and ABMA in the military’s systems will enhance the military’s ability to conduct everything from daily business to actions on the battlefield by reducing the time it takes to plan, process, automate, and augment information that would otherwise require a human to process, analyze, and exploit. Ultimately, AI is here to stay and will continue to evolve and grow exponentially over the next few years. With dwindling resources and funding and a shortage of personnel, AI-integrated virtual reality will come to dominate military exercises in the U.S. and abroad. These systems will require left and right limits to help protect this great nation’s freedoms and civil liberties. Integrating AI and ABMA into the military’s systems will allow Soldiers to complete the mission smarter, faster, and with confidence, and it will return one of the most precious commodities the military cannot get back: time.
References
Allied Air Command Public Affairs Office (Ed.). (2021). Spartan warrior sees NATO members come together in virtual air operations exercise. Allied Air Command. https://ac.nato.int/archive/2021/Spartan Warrior NATO virtual air operations exercise
Ashby, M., Boudreaux, B., Curriden, C., Grossman, D., Klima, K., Lohn, A., & Morgan, F. (2020). Military applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. RAND Corporation. https://www.rand.org/pubs/research_reports/RR3139-1.html
Burt, A., Greene, K., Hall, P., Perine, L., Schwartz, R., & Vassilev, A. (2022). Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. National Institute of Standards and Technology (NIST). https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
Byrnes, N. (2016). A timeline of AI: After a century of ups and downs, artificial intelligence is getting smarter. MIT Technology Review, 119(3), 62-66. https://link.gale.com/A time line of AI: after a century of ups and downs, AI is getting smarter.
Collins, E. (2021). LaMDA: Our Breakthrough Conversation Technology [web blog]. https://blog.google/technology/ai/lamda.
Datategy. (2023). Future military AI/ML Military aircraft recognition using Papai. https://datategy.net/2023/01/12/military-aircraft-recognition-using-papai.
Department of the Army. (2018). Close combat tactical trainer (CCTT). United States Army Acquisition Support Center. https://asc.army.mil/web/portfolio-item/CCTT.
Department of the Army. (2019). Operations. (ADP 3-0). https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN18010-ADP_3-0-000-WEB-2.pdf.
Grooms, G. (2019). Artificial intelligence applications for Automated Battle Management Aids in Future Military Endeavors. Defense Technical Information Center. https://apps.dtic.mil/sti/citations/AD1080249.
Haaxma-Jurek, J. (2021). Artificial Intelligence. In J. Longe & K. Nemeh (Eds.), The Gale Encyclopedia of Science (6th ed., Vol. 1, pp. 343–348). https://link.gale.com/apps/doc/Artificial Intelligence.
Ibrahim, N. (2022). ‘we are not prepared’: Russia uses artificial intelligence, deep fakes in propaganda warfare – national. Global News. https://globalnews.ca/news/8716443/russia-artificial-intelligence-deep-fakes-propaganda-war/
Kim, J., Kim, T., & Han, S. (2023). Demand forecasting of spare parts using artificial intelligence: A case study of k-x tanks. Mathematics, 11(3), 501–511. https://link.gale.com/Demand forecasting of spare parts using AI
Mittal, V., & Schloo, R. (2020). Effectiveness of the Engagement Skills Trainer 2000. General Donald R. Keith Memorial Capstone Conference | United States Military Academy West Point. http://www.ieworldconference.org/content/WP2020/Papers/GDRKMCC_20_45.pdf
Office of Inspector General. (2014). Army needs to improve the reliability of the spare parts forecasts it submits to the Defense Logistics Agency (DODIG-2014-124). https://www.dodig.mil/reports.html/DODIG-2014-124.
Office of the Director of National Intelligence. (2020a). Artificial Intelligence Ethics Framework for the Intelligence Community. DNI.gov. https://www.dni.gov/files/ODNI/documents/AI Ethics Framework for the IC 10.pdf.
Office of the Director of National Intelligence. (2020b). ICEEOD small steps toolkit. DNI.gov. https://www.dni.gov/files/EEOD/documents/ICEEOD Small Steps Toolkit public.pdf.
Office of the Director of National Intelligence. (2020c). Principles of Artificial Intelligence Ethics for the Intelligence Community. DNI.gov. https://www.dni.gov/files/ODNI/documents/Principles of AI Ethics for the IC.pdf.
Vergun, D. (2022). Artificial Intelligence, autonomy will play a crucial role in warfare, general says. U.S. Department of Defense. https://www.defense.gov/AI autonomy will play crucial role in warfare.
________________________________
Woody W. Woodward is an active-duty Army Master Sergeant serving as a Student at the United States Sergeants Major Academy. As a Senior Intelligence Analyst, MSG Woodward has deployed four times in support of multiple named operations in both Iraq and Afghanistan. MSG Woodward holds a Bachelor of Science in National Security from Excelsior University and an Associate of Applied Science from Cochise College in Intelligence Operations.
As the Voice of the Veteran Community, The Havok Journal seeks to publish a variety of perspectives on a number of sensitive subjects. Unless specifically noted otherwise, nothing we publish is an official point of view of The Havok Journal or any part of the U.S. government.