A couple of initial thoughts for perspective and transparency:
- AI is here to stay—get on board or be left behind.
- I used AI to help me write this article. Notice I said “used” and “help”: it did not write the piece for me; it wrote with me.
- I am excited about the potential of AI and was a pretty early adopter.
So…
When my kids’ middle school rolled out iPads years ago, I remember thinking, they’re ahead of the curve. It felt like evolution. But a year later, I realized what had really happened: the school had changed some things, but nothing had evolved. Instead of using those devices to unlock creativity or curiosity, the school turned them into digital notebooks and textbooks. Physical books, pens, and paper were replaced with screens. The students had smaller backpacks but were still memorizing, still copying, still turning things in. They were only getting about halfway up Bloom’s Taxonomy (remember, understand, and apply). They could have gone further (analyze, evaluate, and create) if the school had embraced the new platform and recognized the capacity, not just the capabilities, of the students using the iPads. The tools changed; the learning didn’t.
That’s the difference between change and evolution, a theme discussed in Meditations of an Army Ranger. Change is replacement. We swap one tool for another and tell ourselves something magical will happen. We change to try something, to make something different—sometimes for a reason, sometimes for the sake of change. But evolution is something different. Evolution is purposeful. It’s about growth, improvement, and deliberate design—not just of the tool, but of how we use it so that new capacities are achieved through the new capability, and a deeper understanding can emerge.
And that distinction matters now more than ever, because the same mistake we made putting the iPads in schools, we’re at risk of repeating with AI.
Change vs. Evolution—and Why It Matters for AI
If we use AI to replace our thinking—to write our emails, plan our meals, or do our homework—we’ve simply changed the tool. We have replaced books, encyclopedias, articles, and the internet with a supercharged search engine. We might get a better answer, but we lose the experience of doing the research and learning the why. We just get an answer and move on; not only do we fail to learn anything, we may actually get dumber (see the MIT study on this).
However, if we utilize AI to enhance our thinking—to challenge assumptions, expand creativity, and explore new ideas—then we are not only evolving the tool but also ourselves.
This is not about using AI to think for us; it’s about using it to think with us—what many refer to as hybrid intelligence. That’s where evolution happens.
This is not a new idea; its roots go back to the late ’90s. The concept of “Centaur Teams” originates in the chess world: after losing to IBM’s Deep Blue in 1997, grandmaster Garry Kasparov reimagined competition. Instead of pitting humans against machines, he created contests—often called “Advanced Chess” or “Centaur Chess”—where humans and AI worked as teammates. In these pairings, humans brought strategic insight and creativity, while the machine brought raw computational power and probability-driven calculation. Together, they often outperformed purely human teams and solo chess engines. This collaboration pointed to the next frontier: human-AI symbiosis, an early form of what we now call hybrid intelligence.
When we deliberately engage with our tools—whether it’s a rifle, a wrench, or a reasoning engine—we evolve. The weapon never made the warrior weaker—it made them more effective. The sword didn’t replace strength or skill; it extended it. The wheel didn’t make us lazy; it made us capable. The pen, the typewriter, and the printing press—these things didn’t make us worse; they increased our capability, our capacity, our range, and our impact.
AI has the potential to do what no invention before it could—amplify human capability and capacity by orders of magnitude previously unimaginable. But only if we partner with it, not just use it.
Hybrid Intelligence: Evolution in Action
Hybrid intelligence isn’t about outsourcing thought—it’s about augmenting it. It’s not about doing the same job faster; it’s about achieving outcomes in completely new ways.
Growth requires both thought and action—we’re not here only to think, and we’re not here only to act. We’re here to think and act. AI doesn’t change that equation—it amplifies it. It gives us the ability to explore further, test faster, and think deeper. But it also demands that we stay intentional, engaged, and thoughtful, so that we don’t confuse activity with evolution.
AI requires us to use critical literacy. Critical literacy is the capacity to read, interpret, and engage with texts (in the broadest sense: written, visual, digital, auditory) not just for surface meaning, but for deeper context—asking: Who created this? What assumptions or biases shape it? What power or purpose lies behind it? It means recognizing that texts are socially and politically embedded—that language, media, and technology carry values and intentions, and often reflect power relationships. In other words, while literacy gets you reading and decoding, critical literacy teaches you to question and evaluate—it turns you from a passive receiver of information into an active interpreter of meaning.
Critical literacy has always mattered—it’s what keeps us from taking every “fact” at face value, from accepting first-draft answers as finished wisdom, and from mistaking noise for insight. But in the era of large language models and AI-powered chatbots, it matters even more. When tools like ChatGPT, Copilot, or others generate content at scale, the risk isn’t just laziness—it’s a flood of information that sounds credible but may carry biases, errors, or hidden assumptions. According to recent reporting, AI chatbots “routinely distort the news and struggle to distinguish facts from opinion.” In other words, teaching people to read, question, probe, and challenge what emerges from AI isn’t optional—it’s essential. If we skip that step, we may not just misuse the tool—we may lose the very edge of our thinking.
Critical literacy is the muscle we develop so that when AI hands us an answer, we don’t treat it like gospel—we treat it like a draft, a challenge, an invitation for exploration. In the hybrid intelligence era (human + AI), it’s not enough to have access to powerful tools. We must have the mindset to use them well, to question what they bring back, to shape what they cannot yet see. That’s how we evolve our thinking—not just change our tools.
If we actively partner with AI—if we use AI with purpose—we’ll avoid what happened with those iPads in the classroom: different on the surface, identical underneath. But if we use it to think differently, learn differently, and create differently, we’ll evolve into something greater than either humans or machines alone.
That’s not change.
That’s evolution.
Moving Forward—Together
AI, used thoughtfully, can help everyone develop an adaptive mindset. It doesn’t belong to one generation, one profession, or one sector. It belongs to thinkers—anyone willing to learn, question, and evolve.
Hybrid intelligence can democratize learning and opportunity in ways we’ve never seen. Anyone with access to technology, be it a computer or a phone, can now explore ideas that once required elite education or expensive mentors. A teenager in Kansas, a veteran in transition, or a teacher in Kenya can access the same intellectual resources once locked behind ivy-covered walls.
That’s the promise of AI when it’s used with purpose—not as a replacement for intellect, but as a catalyst for it. This technology is closing knowledge gaps and flattening hierarchies of access. When you pair that with human creativity and discipline, you get something extraordinary—the evolution of intelligence itself, shared across all of us.
The Moment We’re In
We’ve seen transformative tools before: the wheel, the steam engine, electricity, the internet. AI now joins that list of “general-purpose technologies”—tools capable of reshaping how we think, work, and create. According to MIT Sloan, AI may increase productivity growth by 20–50% in the coming decade if used wisely. Other reports suggest more modest gains of 6–16% when 20–40% of tasks are automated or enhanced.
But this moment isn’t just about productivity—it’s about potential. It’s about whether we evolve the way we think, not just the way we work.
Thinking Well with AI—Some Recommendations
Here’s how evolution could look in practice:
- Ask better questions—your AI is only as powerful as your curiosity.
- Engage in dialogue—treat AI’s answers as a first draft; push back, refine, and explore. Make it a thought-partner.
- Challenge assumptions—AI mirrors our biases unless we teach it to challenge them.
- Iterate faster—use AI to test ideas rapidly, not to skip the work of thinking.
- Build critical literacy—learn to read AI like you’d read anything—question, verify, and trust conditionally.
If we offload thinking, we weaken the muscle. But if we partner with AI to think better, we strengthen it.
Use AI like a fancy internet, and you’ll get a fancy internet.
Use AI like a thinking partner, and you’ll change how you think.
______________________________________
JC Glick serves as the Chief Executive Officer of The COMMIT Foundation. JC brings with him a wealth of experience as a leadership consultant and career Army officer and is driven by a deep commitment to supporting veterans in their transition journey. Since transitioning from 20 years of military service in 2015, JC has been a founder and partner of two leadership companies, where his clients included Fortune 500 companies, international non-profit organizations, government agencies, the NFL, numerous NFL and NBA teams, and multiple NCAA programs.
Over the course of his Army career, JC spent over seven years in the Ranger Regiment, serving in two Ranger Battalions as well as Regimental Headquarters and competing twice in the Best Ranger Competition. He has over seven and a half years of command time, with 11 operational and combat deployments to Haiti, Bangladesh, Iraq, and Afghanistan. JC is the author of two books, including A Light in the Darkness: Leadership Development for the Unknown. In 2017, he was selected as a TEDx speaker and delivered “Rethinking Leadership” at TEDx Hammond. JC is also an adjunct professor at St. John’s University in Queens, New York. He holds a degree in Political Science from the University of Rhode Island and is a Liberty Fellow of the Aspen Institute.
As the Voice of the Veteran Community, The Havok Journal seeks to publish a variety of perspectives on a number of sensitive subjects. Unless specifically noted otherwise, nothing we publish is an official point of view of The Havok Journal or any part of the U.S. government.