Within the decade, AI became a utility, delivered via the Internet of Things, often by verbal interface. Your AI served you as much IQ as you wanted but no more than you needed. Like all utilities, AI turned out to be supremely boring, even as it transformed the Internet, the global economy, and civilization. This utilitarian AI also augmented us individually as people (deepening our memory, speeding our recognition) and collectively as a species. Today in 2025, we’ve seen at least 10,000 startups whose business plan was “Take X and add AI.” The IC even adopted some of them. Sure, we have to find specialized repairbots for our dumb office refrigerators and microwaves, but our office chairs push themselves in and out to aid our AI-managed roombas that replaced the human charforce.
At the same time, elements within the IC returned to its OSS roots in cognitive science. Not just for PSYOPS anymore! The leaders realized that humans think differently than AIs in important ways. We humans ascribe elusive context to variables machines can only quantify. Among the many ranges of human genius, tolerating ambiguity and making winning leaps of intuition cannot be replicated by even the most complex neural network. AIs have almost limitless memory and do huge math at speed, but can’t figure out what to work on, or which other intelligences to make use of. Most importantly for the IC, human intuition outperforms the brute-force computational power of an AI in many social situations, such as war and espionage, where there are adversaries and competing goals.
We have the creativity to direct ourselves and our AIs, selecting where to best focus our attention and often applying orthogonal thinking. We also have empathy (at least some of us do), a factor that can’t be overlooked in real-world decision-making. Victory in war or intelligence at the start of the 21st century depended on limited and fragile humans operating complex sociotechnical systems that left little room for error. Teaming humans and machines together achieved the best of both worlds.
2015: The Inflection Point
Understanding the best organization, fusion, and direction for human–machine systems had preoccupied the U.S. defense-industrial complex ever since the Cold War. As well it should: the DoD is naturally uneasy about letting robots decide to use lethal force.
In 2015 the Department of Defense, under the guidance of Deputy Secretary Bob Work, paved the way for the IC’s use of centaurs by incorporating them into the Third Offset Strategy, taking advantage of the potential for human and machine to be far more effective together than either would be alone.
The term “offset strategy” was coined in the 1970s to describe a situation where the US couldn’t match Soviet numbers, so it would have to “offset” them with superior quality and technology. We needed a third one to address the emerging conflict zones of cyberspace and outer space, and the DoD adopted an approach that relied not just on technology but on the one American advantage China couldn’t simply copy or steal: our people. (Well, at least not yet.)
“It’s actually not an either-or,” military futurist Paul Scharre said in 2016. Like the mythical centaur, we can harness inhuman speed and power to human judgment. We can combine “machine precision and reliability, human robustness and flexibility.” Scharre was then head of the Center for a New American Security’s 20YY Future of Warfare Initiative, which was founded by then-CNAS chief executive officer Bob Work, who co-wrote its inaugural study.
If that CV seems like the usual Beltway echo-chamber-quasi-nepotism, you might use your human intelligence to draw a parallel with centaurs, in which the AI presents the human with two or three options. If you’ve been around Washington long enough, you know that the “decisionmakers” are often puppets of their own staff, who determine which options make it to the boss’s desk in the first place and then put a heavy thumb on the scale for the one they prefer. Won’t the tail wag the, uh, centaur? It’s a risk, so we’ve had to develop AI training to make sure the human isn’t just a rubber stamp for the computer. It seems to be working well – RUMINT says that POTUS is considering replacing the NSC with a centaur. #kiddingnotkidding
While there are fewer human analysts across the IC than there were back in 2016, production has increased in both quality and quantity. Centaurs navigate the complexity of today’s rapidly evolving environment and track the goals and actions of our ever-shifting adversaries. Analysis centaurs provide assessments to synthesis centaurs, who make the most probable connections. Author centaurs write the finished intelligence, with the AIs drafting and editing and the humans doing the final pass.
Of course, the other Beltway risk, that policymakers will opt against the best choice, or even the top three, for personal political gain, is still and always with us.