Lt. Gen. John “Jack” Shanahan does not talk about artificial intelligence like a science fiction prophet. He talks like a software reformer who had to get something working by the end of the year.
“The reality is far less cinematic and far more useful,” Shanahan said in an interview ahead of his Ruth Pauley Speakers appearance at BPAC, where he will discuss “Artificial Intelligence and the Digital Revolution: The Implications for National Security.”
Shanahan retired from the Air Force in 2020 after a 36-year career. He served as the inaugural director of both Project Maven and the Joint Artificial Intelligence Center (JAIC), efforts he describes as “AI startups inside the Department of Defense.” The phrase captures the urgency and resistance he encountered trying to bring commercial-grade software into one of the world’s largest bureaucracies.
The push began under then-Defense Secretary Ash Carter, who warned that the Pentagon was missing the software revolution taking off in Silicon Valley. The Defense Innovation Unit built a bridge to the commercial sector, but the turning point came when Shanahan’s team confronted an uncomfortable truth: machines were collecting more data than humans could process.
“We called it a ‘catastrophic success,’” Shanahan said. “Analysts were staring at screens for 10 to 12 hours a day. Maybe two of those hours had life-or-death consequences. The rest was mind-numbing.”
Shanahan sent a small team to search for solutions. The answer wasn’t in a classified lab. It was in the commercial market. “Inside the department, the best research was years from being fielded,” he said. “Meanwhile, industry already had pieces we could adapt.”
After a demonstration for Deputy Defense Secretary Bob Work, Project Maven was approved with an aggressive goal: get useful AI into operators’ hands within the calendar year.
That deadline forced a cultural shift. “Many people ignored us. Some resisted,” Shanahan said. “But the mission was simple: help the user now, not in 2030.”
For those who imagine drones making independent decisions, Shanahan is quick to clarify. “Ninety percent of this work will never make a headline,” he said. “It’s the boring stuff that makes organizations better: predictive maintenance on helicopter engines, digitizing millions of pathology slides, business systems that finally pass audits, suicide-risk awareness models to get help to the right people faster.”
The framework, he explained, is “augment, accelerate, automate,” in that order. “This is humans with machines, not humans replaced by machines,” he said.
Still, the smaller slice of work that does grab headlines – autonomous systems and weapons – demands careful public discussion. “There are opportunities, like reducing civilian casualties by improving target identification,” he said. “There are also real risks. The public deserves to understand both.”
Project Maven collided with that debate when Google employees protested the company’s involvement. Shanahan took the criticism seriously. At the JAIC, he created a Responsible AI division and helped draft the Defense Department’s AI ethics principles. “That was the easy part,” he said. “Implementing them is the hard part. What does ‘governable’ mean in code? What does ‘equitable’ look like for a model? You have to design ethics in; you can’t tape them on later.”
He describes the slower pace of defense adoption as a mixed blessing. “It gives a little more time to think through consequences,” he said. “But the pace is speeding up. We can’t outrun responsibility.”
On the global stage, Shanahan avoids “race” metaphors but is clear about the balance of power. “The U.S. has been dominant for a long time: talent, compute, companies born digital,” he said. “China is catching up fast. Russia is behind both.” The imperative, he argued, is to keep pushing forward without racing to the bottom on safety and ethics.
Predicting the next 10 to 20 years is difficult, he acknowledged. “It’s hard to predict the next year,” he said, pointing to how quickly large language models went from research papers to everyday tools. Still, he sees better reasoning models, software agents, and robotics converging into embodied AI. “You give high-level intent, and systems can plan and act in the physical world, crossing the digital-to-physical divide,” he said. “That’s thrilling and a little unnerving.”
For Shanahan, the bigger takeaway is civic, not technical. “We live in a democracy,” he said. “Technology should advance with the consent of the governed, not just the governors. The Defense Department must be more transparent about what we’re building and why. Communities like this deserve to ask hard questions and shape the answers.”
After retiring, Shanahan earned a master’s degree at NC State and now serves on university and national advisory boards. He will drive down from Raleigh for his Ruth Pauley Speakers talk, where he hopes to move the conversation beyond science fiction and into practical reality.
“AI for national security isn’t science fiction,” he said. “It’s software: messy, imperfect, powerful. If we do it right, it augments people, accelerates what matters, and automates the mind-numbing. It can reduce harm as much as it increases capability. But we have to stay engaged.”
Shanahan will speak at 7 p.m. Tuesday, Oct. 21, at BPAC as part of the Ruth Pauley Speakers series. Tickets are available now at RuthPauley.org and TicketMeSandhills.com.