Artificial intelligence has taken some hits this past week. One big gut punch has been its misuse by humans rushing to identify the D.C. Guardsmen shooter without understanding the fundamentals and nuances of how AI works or how to use it responsibly in cases like this. AI may be fast, but humans must exercise patience instead of rushing blindly to publish the first story rendered from it.
I am a strong advocate for AI. I see its potential but also completely understand its limitations because I have taken a keen interest in evaluating it. In the following, I want to share how I go about using AI with my research.
It all begins with a prompt. In that sense, AI works much like the Google search bar. To get the most out of AI, however, you must give it more direction and autonomy so it can render graduate-level responses. Let me showcase an example:
START
PROMPT:
“Gemini, do me a favor. I am researching the D.C. shooting last week and the issues AI returned when it came to the pictures of the suspect. I will use the acronym AIM to define my parameters:
Actor: Be a curious and discerning investigative reporter. (Here I am giving AI a persona.)
Input: Look past your cutoff date and research the week of 23-29 November 2025 in the U.S. capital, Washington, D.C. (I am giving it details because the incident is so recent that I know it is not in its model's database. It will have to reach out to the live internet to pull its data.)
Mission: Explore the available data and tell me how the media got the pictures wrong within hours of the shooting. How did AI fail, and was the failure really one of humans' dependence on AI? Be very curious and skeptical, without bias toward AI.” (I am stating my intent as well as instructing it to be precise with the answer it renders.)
Gemini Take (11/30/2025):
The Immediate Failure: AI, Misinformation, and the D.C. National Guard Shooting Suspect
This is an investigative failure involving the shooting of two National Guard members on 26 November 2025 in Washington, D.C., where the spread of AI-generated or AI-enhanced images played a central, confusing, and potentially damaging role in the initial hours.
As a curious and discerning investigative reporter, here is the explored data on how the “pictures” were corrupted and misidentified instantly:
1. The Real-Time Information Vacuum vs. the AI Fill
The core issue is a conflict between the slow, deliberate process of official identification and the lightning-fast demands of the 24/7 news cycle and social media.
โข The Vacuum: In the immediate hours after the tragic ambush, law enforcement had the suspect, later identified as Rahmanullah Lakanwal, in custody. However, they were still in the process of positively confirming his identity and background, which involved cross-referencing databases, confirming aliases (a common issue with vetting in the wake of the Afghanistan withdrawal), and notifying his family.
โข The Demand: The public and media were demanding a name and a face immediately.
โข The AI Fill: Social media platforms, specifically those with integration with large language models (LLMs) and AI image tools, were prompted for information. As official details were scarce, AI chatbots and image-enhancement tools filled the void by taking limited, often blurry surveillance footage or initial law enforcement photos and “hallucinating” or “enhancing” them.
2. The AI Contamination of Key Evidence
The data suggests the pictures were corrupted in three primary ways:
Flaw Type: Image Enhancement/Fabrication
AI tools, sometimes built into social media platforms (like Grok), took poor-quality images (e.g., from a distant security camera) and used algorithms to โupscaleโ or โsharpenโ them. This created images that were physically misleading. One report noted an AI-enhanced image made the suspect appear significantly older than his 29 years, potentially derailing age-based aspects of the investigation.
Flaw Type: False Identification
AI chatbots, working in real time on rapidly evolving and conflicting social media posts, frequently misidentified the suspect entirely. Before official identification was released, AI systems were found to have named and displayed pictures of unrelated individuals, forcing police departments to issue clarifications about AI-fueled misinformation.
Flaw Type: Feature Distortion
The AIโs process of filling in missing pixels or details led to changes in minor but crucial details like clothing design and facial features. When AI distorts a suspectโs shirt pattern or subtly changes their nose or chin, it becomes harder for genuine witnesses to offer confirmation.
Investigative Takeaway: AI does not check facts; it predicts the most probable appearance or wording based on its training data. In the chaotic hours of a breaking news event, the probability of generating a false or distorted image is 100%. The images circulated were not โwrongโ by human error, but by algorithmic certainty built on insufficient data.
3. The Malicious and Biased Amplification
The misidentification was not contained; it was turbocharged by the platforms themselves.
โข The Misinformation Loop: An AI-enhanced, distorted picture of the wrong person would be posted by one user, instantly picked up by others, and then amplified by the AI algorithm because it generated high engagement.
โข The Background Bias: Given the highly charged political context, the false or distorted AI images fueled existing biases and conspiracy theories on social media platforms, leading to harassment and focus on innocent people who visually resembled the distorted picture.
The failure was not one of simple human error, but a dangerous interplay between the modern news appetite, the speed and inherent unreliability of new generative AI tools, and the viral nature of social media that prioritizes speed over accuracy.
END
Here are my thoughts:
When incidents like this occur, emotion, bias, and impatience unfortunately trample on responsibility. We must always keep in mind that AI is only a tool and will make the best predictive guess based on its modelโs training. It is not the fortune teller with the crystal ball, and we must stop demanding it to be so.
The rule I use when prompting AI is the acronym A.I.M. As I illustrated in my prompt, I give AI clear and concise instructions based on an understanding of how it thinks and operates. We would all be better stewards of this emerging technology if we learned how it works rather than just jumping on Gemini and asking, “Give me a recipe for chicken wings.”
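For readers who build prompts programmatically, the A.I.M. structure can be captured in a few lines of code. This is only an illustrative sketch of the template idea; the class and field names are my own and are not part of any AI vendor's API:

```python
from dataclasses import dataclass


@dataclass
class AimPrompt:
    """Bundle the three A.I.M. parameters into one reusable prompt template.

    Illustrative only: the names here are mine, not any vendor's API.
    """
    actor: str    # the persona the model should adopt
    input: str    # scope, dates, and sources the model should draw on
    mission: str  # the task and the standard the answer must meet

    def render(self) -> str:
        # Emit the three parameters in the same order as the written prompt.
        return (
            f"Actor: {self.actor}\n"
            f"Input: {self.input}\n"
            f"Mission: {self.mission}"
        )


prompt = AimPrompt(
    actor="Be a curious and discerning investigative reporter.",
    input="Research the week of 23-29 November 2025 in Washington, D.C.",
    mission="Explain how the media got the suspect's pictures wrong.",
)
print(prompt.render())
```

The point is not the code itself but the discipline it enforces: every prompt must state a persona, a scope, and a mission before it is sent.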
As for the incident, here is why that is crucial to remember: the suspect, 29-year-old Rahmanullah Lakanwal, an Afghan national who previously worked with the CIA in Afghanistan, was most likely never photographed and archived on the open internet. The sensitivity of his work with Afghan Special Forces and his close ties to CIA programs strongly suggest that, until he was brought to the U.S. after the botched withdrawal, his image never circulated on the open web. This is speculative, but it is an intelligent guess.
So as initial photos captured by witnesses of him being loaded into an ambulance after the incident were fed into AI, the model could only associate the images with similar images in its training database. Short of that, it will hallucinate, or, in other words, make shit up. The model used that day was trained with billions of parameters but had a definitive cutoff date, much like a Last Time of Valuable Information (LToVI). AI also does not know that it is being asked to render a positive and precise photo identification of a person suspected of a crime, where getting it wrong could have dire consequences. Therefore, it makes the best, agnostic guess it can with the available data.
The cautionary tale here is that when a crime takes place and AI is brought in to help with the investigation, there is a case to be made that it might not be prudent to allow AI output as evidence in court, as this case study shows. By every account a jury would hear, AI failed and could have implicated an innocent citizen. I personally do not think that framing is entirely fair, because I see a lot of human error here too, born of inexperience and a rush to sensationalize. However, I digress… AI regulation and policy are still being drafted in multiple sectors at the time of this publishing. We are at a weird point in time right now, just holding on to the reins as tight as possible before the AI pony is broke.
As an advocate of the responsible and accountable use of AI in my everyday life, this certainly serves as an important reminder that discernment and patience must outweigh the desire to have questions answered now.
We have unfortunately lost one of the Guardsmen, and the other remains in critical condition but continues to fight. Out of respect to our fellow sister and brother of the cloth:
Army SPC. Sarah Beckstrom: A 20-year-old from the West Virginia Army National Guard’s 863rd Military Police Company passed from her injuries, becoming a hero for her service. May she rest in peace now, and may prayers for healing for her family, friends, and fellow Guardsmen follow.
Air Force Staff Sgt. Andrew Wolfe: A 24-year-old from the West Virginia Air National Guard underwent surgery and remains hospitalized in critical condition. His family, friends, and fellow Guardsmen continue to stand by his recovery.
Official accounts of the shooting confirm that the shooter was immediately subdued by other nearby National Guard members who returned fire. These fellow Soldiers and Airmen, without hesitation, transitioned from being security patrol members to immediate combatants and then to life-saving first responders for their colleagues.
To all the brave personnel who were first on the sceneโthe fellow Guard members who engaged the threat, the D.C. police, and the medical teamsโwe honor your courage and rapid response. You know who you are, and we salute you.
โThe only thing that makes battle psychologically tolerable is the brotherhood among soldiers. You need each other to get by.โ
โ Sebastian Junger, journalist and author
_____________________________
Robb Munger is the founder of North Seeking Arrow LLC, a veteran-owned AI consultancy specializing in implementing and integrating the A.R.R.O.W.™ framework across the health, government, education, and manufacturing sectors. He served 21 years in the U.S. Army as a Cavalry Scout, completing multiple combat deployments and diverse leadership assignments before retiring as the First Sergeant of Shadow Troop, 1-33 CAV (Rakkasans).
After retiring from the military, Robb worked as a Reliability Engineer for a global paper processing company during the pandemic before returning to support elite warfighters at Fort Bragg, NC. He now combines his experience in technology, leadership, and service to support and advance the military community.
โThe answers we seek all lie in potential.โ
As the Voice of the Veteran Community, The Havok Journal seeks to publish a variety of perspectives on a number of sensitive subjects. Unless specifically noted otherwise, nothing we publish is an official point of view of The Havok Journal or any part of the U.S. government.
© 2026 The Havok Journal