From Computing to Cognition: Integrating AI into US Defense Education and Strategic Planning



Disclaimer:
This research project uses data derived from open-source materials like public intelligence assessments, government publications, and think tank reports. This report is based solely on my personal insights, hypothetical scenarios, and independent analysis. It does not contain any sensitive or classified information and does not reflect the views of my employer. This report's purpose is to serve as an exercise in research, analysis, and critical thinking. 

Purpose: This paper argues for the reframing of AI as a strategic tool, not an existential threat, and outlines how US defense education institutions must evolve to prepare future leaders for operationalizing AI in national security environments. 

Executive Summary: Artificial intelligence (AI) is transforming the strategic, operational, and educational dimensions of national defense. While public discourse often gravitates toward extremes, the reality is more pragmatic: AI is becoming foundational infrastructure in modern warfare. As such, the Department of Defense (DoD) and its professional military education (PME) institutions must adapt to cultivate leaders who understand, integrate, and govern AI systems effectively.

This paper argues for a shift in how AI is conceptualized within defense circles. Drawing historical parallels to the role of ENIAC during World War II, I contend that AI should be seen less as an independent cognitive entity and more as a strategic enabler - one that augments decision-making processes across all echelons of command. The report outlines current defense applications of AI, analyzes institutional barriers to integration within PME, identifies governance challenges, and positions AI literacy as a cornerstone of future competitive advantage.

Key recommendations include embedding AI case studies and simulations into curricula, developing interagency and industry-academic partnerships, and enforcing principles of explainability and human-in-the-loop oversight. Ultimately, preparing warfighters and strategists for the AI era requires a comprehensive modernization of defense education grounded in technical fluency, ethical judgment, and operational relevance.


Introduction: AI has rapidly moved from theoretical construct to operational reality. Once confined to academic laboratories and speculative fiction, AI now underpins critical functions in logistics, intelligence, command-and-control (C2), and cybersecurity. As the US and its adversaries invest heavily in AI for strategic advantage, the defense community must make a pivotal choice: will AI be treated as a black box novelty managed by contractors, or as a core component of national defense doctrine managed by trained leaders?

This paper adopts a strategic lens to answer this question, using the legacy of early computing - particularly ENIAC's wartime role - as a historical analogue. Just as ENIAC revolutionized how ballistic trajectories were computed, enabling faster and more precise battlefield decisions, AI today offers unprecedented opportunities to extend cognitive reach. But the key to unlocking this potential lies not just in technology, but in human leadership.

The central thesis is that AI must be embedded into defense education as both subject and tool. PME institutions need to produce not only tacticians and strategists, but also technologically literate leaders who understand AI's strengths, limitations, and ethical implications. By framing AI as infrastructure, we position it where it belongs: at the heart of 21st-century defense readiness.

The sections that follow will explore the evolution of AI narratives, real-world applications in defense, barriers to educational integration, risk governance, and the implications of strategic competition in the age of AI.


Public Fears and Dystopian AI Narratives

Public discourse around AI often gravitates toward sensational fears - from Hollywood's Terminator-style takeovers to worries of mass unemployment. Surveys have shown that a majority of Americans approach AI with trepidation. For example, a 2023 Pew Research Center survey [1] found that 52% of US adults were more concerned than excited about growing AI use (versus only 10% who were more excited).


Figure 1: Pew Research Center Survey, 2023

Common public concerns include, but are not limited to:

  • Existential "AI Takeover" Scenarios: Dystopian scenarios loom large. In one poll, 63% of US adults voiced worry that AI could lead to harmful behavior, and a similar share feared AI systems might "learn to function independently from humans" [2]. Over half (55%) even believed AI could pose a risk to the very existence of the human race. Such views reflect the enduring influence of science fiction tropes. The 1984 film The Terminator, for instance, "popularized fears of unstoppable machines" and cemented the notion of AI as an existential threat in the public imagination [3]. Decades later, its imagery of a rogue superintelligence (Skynet) remains shorthand for AI doom in media narratives.
  • Mass Unemployment and Social Disruption: Another prevalent fear is that AI and automation will displace human workers on a massive scale. Among Americans more concerned than excited about AI, the risk of job losses is the top reason for their concern. As an example, about 83% of Americans expect that driverless vehicle adoption would eliminate jobs like rideshare and delivery drivers. This anxiety extends beyond blue-collar work: white-collar workers also worry that advances in generative AI could render their skills obsolete. Media coverage often highlights these scenarios of AI-induced economic upheaval, reinforcing public apprehension that "the robots" will leave humans unemployed.
  • Loss of Human Control and Ethical Misuse: People also fear humans could lose control over AI systems, leading to unpredictable or unethical outcomes. High-profile AI incidents and dystopian portrayals have primed the public to be wary of autonomous decision-making. In surveys, large majorities express concern that increasing AI use will erode privacy or be deployed in ways they are not comfortable with. Ethical campaigns have seized on these fears - for instance, advocacy groups invoking “killer robot” imagery push for bans on lethal autonomous weapons, tapping into public unease about machines making life-and-death decisions [4]. The vivid narrative of a moral boundary crossed by ungoverned AI resonates widely, even if actual military policy still mandates human oversight of use-of-force decisions.
These dystopian or exaggerated perceptions are amplified by popular media and entertainment. While they reflect genuine concerns, they often overshadow the more mundane reality of what current AI can (and cannot) do. The result is a public narrative skewed toward worst-case scenarios - one that stands in stark contrast to how strategic decision-makers view AI.

Defense Strategists' Perspective: AI as a Tool, Not a Terminator

Great catchline, I know. At the strategic level - particularly within U.S. defense and national security circles - artificial intelligence is predominantly seen as a force multiplier and necessary enabler, rather than a sentient threat. The Department of Defense (DoD) views AI as a technology to be harnessed in order to maintain a competitive edge. The Pentagon’s official strategy frames AI as transformative in augmenting human capabilities and improving military effectiveness, not replacing human judgment outright [5]. Key leaders emphasize integration over fear:

  • Maintaining a Competitive Edge: The DoD’s Third Offset Strategy explicitly aimed “to exploit all the advances in artificial intelligence and autonomy and to insert them into the Defense Department’s battle networks” as a means to preserve U.S. military superiority [6]. Rather than dwelling on speculative dangers, defense planners focus on how AI can change the character of warfare to the U.S.’s advantage. The 2018 National Defense Strategy anticipated that AI will significantly alter warfighting, and accordingly officials like Lt. Gen. Jack Shanahan (first director of the Joint AI Center) argued the United States “must pursue AI applications with boldness and alacrity” to retain strategic overmatch. In this view, failing to embrace AI is the bigger risk, as adversaries racing ahead in AI could threaten U.S. security.
  • AI as a Practical Enabler: Inside the Pentagon, AI is treated as a suite of powerful tools - from data-crunching algorithms to intelligent decision-support systems - that can streamline operations and enhance human decision-making. Officials stress that current AI is narrow and task-specific, not an all-powerful brain. For example, the Joint Artificial Intelligence Center (JAIC) was established in 2018 specifically to accelerate the DoD’s adoption and integration of AI across missions [7]. JAIC’s mandate has been to serve as an AI center of excellence providing resources and expertise to military units, underlining that AI’s role is to assist warfighters and analysts. As JAIC Director Lt. Gen. Michael Groen put it, “We seek to push harder across the department to accelerate the adoption of AI across every aspect of our warfighting and business operations”. This illustrates the prevailing mindset that AI is a general-purpose capability to be infused into logistics, intelligence analysis, maintenance, training, and other domains to make the force more effective and efficient.
  • Augmentation, Not Autonomy Run Amok: Defense leaders are generally cognizant of public fears and have repeatedly clarified that their pursuit of AI is not about ceding control to machines. DoD policies (such as directives on autonomous weapons and the 2020 AI Ethical Principles) insist on meaningful human oversight of AI-driven systems. In practice, the military’s near-term AI projects are largely focused on decision support, automation of tedious tasks, and optimizing workflows - far from Hollywood’s rogue robots. As one Navy official noted, much of AI’s impact will come through “mundane applications… in data processing, analysis, and decision support,” rather than any dramatic battlefield androids. The internal narrative frames AI as a collaborative technology: an aid to human operators that can sift intelligence faster, predict maintenance needs, or simulate scenarios - ultimately empowering human decision-makers, not displacing them. This perspective stands in stark relief against the “AI takeover” trope; instead of fearing AI’s agency, defense strategists worry about not using AI enough to keep pace with rivals.
In summary, U.S. defense decision-makers tend to regard AI as a critical enabler to be integrated responsibly into military and security operations. The emphasis is on opportunity - leveraging AI to enhance national security - tempered by pragmatic risk management (ensuring reliability, ethics, and control), rather than on existential danger. This measured, tool-oriented outlook differs markedly from public dystopian narratives, focusing on AI’s strategic utility rather than its threat to humanity.

Think Tank Perspectives: Weighing Risks Versus Strategic Integration

Leading national security think tanks and research centers (RAND, CNAS, CSET, and others) have analyzed AI’s implications and generally echo the need to avoid hyperbole. Their reports often strike a balance - acknowledging legitimate risks from military AI, yet cautioning against exaggerated fears that could hinder innovation. Several consistent themes emerge from expert analyses:
  • AI as Transformative, but Not Apocalyptic: Analysts note that while AI will shape the future of warfare, it is better understood as a continuum of technological evolution rather than a revolution that overnight yields superintelligent machines. A recent Center for a New American Security (CNAS) study argues that comparisons to an “AI arms race” are overblown - in reality, military adoption of AI today “looks more like routine adoption of new technologies” in line with the decades-long trend of incorporating computers and networking into forces [8]. In other words, there is momentum behind AI integration, but not the kind of breakneck, uncontrolled spiral that sci-fi scenarios or headlines might suggest. The report underscores that current military AI is a general-purpose technology akin to an improved computer, not a doomsday weapon in itself.
  • Concrete Risks: Safety, Bias, and Escalation: Think tank assessments tend to focus on tangible risks that come with deploying AI - e.g. system failures, vulnerabilities, or inadvertent escalation - rather than speculative sentience. A RAND Corporation analysis of military AI highlighted issues like reliability in high-stakes contexts and the need for testing to prevent accidents [9]. Similarly, CNAS has pointed out the risk that flawed AI could misidentify threats or act unpredictably in complex environments, which could increase the chance of accidents or even unintended conflict if not managed. These are serious concerns, but notably within the realm of technical and strategic problem-solving - addressable by policy, human oversight, and international norms - as opposed to uncontrollable AI revolt. By highlighting such issues, experts aim to ensure integration is done responsibly, without invoking a need to halt AI advancements altogether.
  • Strategic Integration as Imperative: On the whole, expert communities frame AI as an indispensable element of future national security, one that must be integrated strategically and swiftly. The consensus is that the U.S. cannot afford to fall behind in AI adoption, given competitors like China investing heavily in military AI. For instance, a RAND report on DoD’s AI posture emphasized scaling up AI experiments and talent to maintain U.S. tech superiority. Think tanks frequently describe AI as a “general-purpose technology” that will underpin intelligence analysis, cybersecurity, logistics, and more - a foundation for military power much like electricity or the internet. As such, their recommendations often focus on accelerating AI integration (through funding, R&D, public-private partnerships) while instituting safeguards (ethical guidelines, testing regimes, confidence-building measures internationally) rather than entertaining the idea of slowing or banning military AI outright.
In think tank narratives, there is an implicit push to reframe the conversation about AI in national security. Rather than viewing AI itself as the threat, the emphasis is on the risk of misusing or not using AI. Experts urge policymakers to mitigate the real risks - such as unintended escalation or AI failures in weapons - through norms and oversight, but at the same time to push beyond public fear-based reluctance so that beneficial AI applications are not lost. This balanced perspective reinforces the notion that AI, handled correctly, is a net strategic enabler, not a harbinger of doom.

Narrative Gaps in Policy, Investment, and Education

The divergence between public fears and defense-sector views of AI has tangible effects on policymaking, defense investments, and even the education of the national security workforce. A threat-centric narrative can create frictions - from public resistance to military AI projects, to slowed adoption - whereas an enabler-centric narrative could foster more proactive policy and innovation. Several notable impacts of the differing narratives include:

  • Public Opinion Shaping Policy Debates: Heightened public fear of AI can translate into political pressure for restrictive policies. Lawmakers attuned to their constituents’ dystopian anxieties may call for strict regulations or bans on certain AI uses (e.g. autonomous weapons) before the technology is fully understood. For instance, the visceral “killer robot” trope has fueled campaigns at the United Nations to ban lethal autonomous systems preemptively. While ethical in intent, such moves - driven by worst-case imagery - could limit the military’s ability to develop AI for defensive or benign uses (like active protection systems) if not carefully negotiated. On the flip side, when expert communities and defense leaders advocate AI as a strategic necessity, they push for policies that invest in AI R&D and set guidelines for responsible use rather than prohibition. This tug-of-war between dystopian narratives and strategic imperatives plays out in policy forums. The outcome can affect everything from budget allocations to the rules governing AI development. A climate of fear might spur oversight (e.g. Congressional hearings grilling AI programs for potential dangers), whereas a reframed narrative highlighting AI’s national security benefits could build public and bipartisan support for sustained investment.
  • Tech Industry Engagement and Investment: The narrative gap also directly impacts collaboration between the government and the tech industry - a critical relationship for defense AI innovation. A stark example was Google’s withdrawal from the Pentagon’s Project Maven in 2018 after employee protests. Google engineers, influenced by concerns that their work on AI could contribute to lethal drone operations, argued it ran afoul of the “Don’t be evil” ethos. Facing internal revolt and public criticism, Google opted to cancel its AI contract with DoD [10]. This incident sent shockwaves through the defense community. It demonstrated how a workforce steeped in dystopian AI fears or moral concerns can impede defense AI projects, even those aimed at non-lethal tasks like imagery analysis. The MITRE Corporation analyzed this rift and noted that thousands of tech employees objected to their companies partnering with the military, perceiving it as “going against their values”. Similar pushback hit other firms (Microsoft, Amazon) in cases where AI or tech contracts for defense raised alarm among staff. The result is a chilling effect on defense tech investment: companies become hesitant to bid on AI programs that might spark public relations issues or staff resignations. This dynamic hampers DoD’s access to top AI talent and tools. Defense strategists recognize that sustaining U.S. military AI leadership requires close cooperation with the private sector (which leads in AI innovation) - but that cooperation is harder to forge when the public narrative paints such work as contributing to dystopia. Bridging this gap is thus seen as essential for investment and innovation.
  • Defense Education and Talent Development: Within military and defense educational institutions, there is a concerted effort to counter hype and fear with sober, informed understanding of AI. Leaders acknowledge that some segments of the public - and even the workforce - are uneasy about AI. To address this, defense educators are reframing the narrative for the next generation of officers and analysts. A U.S. Naval War College conference in 2019 was pointedly titled “Beyond the Hype: Artificial Intelligence in Naval and Joint Operations,” aiming to dispel misconceptions and highlight practical applications of AI as a tool. Scholars and military practitioners at that event discussed real-world use cases and limitations of AI, rather than science-fiction fantasies, implicitly teaching that AI is a technology to be mastered, not feared. Likewise, the DoD has launched AI education initiatives to raise the baseline knowledge across the force. The 2020 DoD AI Education Strategy called for integrating AI into professional military education curricula and training programs, ensuring personnel have a basic grasp of AI capabilities and ethics. This not only prepares the workforce to use AI effectively, but also helps inoculate them against sensationalized notions. By normalizing AI as another subject of proficiency - alongside cybersecurity or electronics - the defense community is building a culture that views AI rationally and focuses on operational advantages and safeguards. In short, defense education efforts seek to narrow the narrative gap by producing leaders who can engage with AI’s opportunities and risks in a nuanced way, rather than defaulting to pop-culture-driven extremes.
The effects of narrative are thus self-reinforcing. Public fears, if unaddressed, can slow or skew policy and scare off key partners, which in turn could hinder the U.S. from fully leveraging AI for security. Recognizing this, many defense stakeholders argue that winning the “hearts and minds” on AI - both within the force and among the public - is becoming as important as the technology itself. This sets the stage for reframing AI’s role in national security.

Reframing AI as a Strategic Enabler

Given the evidence, a clear lesson for the defense community is the need to shift the narrative on artificial intelligence from one of looming threat to one of strategic enablement. The goal of such reframing would be to align public perception with the reality that AI, managed correctly, is a tool that can enhance security and prosperity, not an out-of-control adversary. Support for this reframing argument is found in both policy analysis and practice:
  • Emphasizing Benefits and Mission Outcomes: Defense agencies are beginning to tell a more positive, concrete story about AI’s role. Rather than speak in abstractions, they highlight how AI can save lives by improving search-and-rescue, or how it reduces routine workload for troops. This kind of messaging helps the public and Congress see AI as directly contributing to safer, more effective military operations. A MITRE study in 2020 specifically urged DoD leaders to communicate a compelling narrative about “the value of defending the country with honor” using modern technologies like AI, and to stress the Department’s commitment to ethical deployment of these tools. By showcasing adherence to ethics and human oversight, the Pentagon can alleviate fears of ungoverned AI. For example, DoD’s adoption of AI is often coupled with a Responsible AI framework - sending the message that the U.S. will use AI in line with its values, not as a reckless killer robot. Making such assurances public and transparent can build trust and counteract dystopian impressions.
  • Bridging the Cultural Divide: AI as an enabler also involves closing the gap with the tech sector and general workforce. This means engaging Silicon Valley and young technologists on shared values and national security needs. Success stories of AI-public sector collaboration are being lifted up to change minds. For instance, highlighting how an AI tool developed by a tech firm helped U.S. forces deliver aid more efficiently, or how a machine-learning model is saving maintenance costs in the Air Force, can illustrate AI’s positive impact. Think tanks and industry leaders suggest that public-private partnerships on AI should be promoted in the narrative  - to show that working on defense AI can be a force for good, protecting soldiers and civilians alike. The hope is that as more technologists see AI projects in defense yielding constructive results (and not just weapons), the stigma diminishes and investment flows more freely. In tandem, DoD is adjusting its own messaging to be more receptive to ethical concerns, rather than dismissive. Instead of waving away protests, defense advocates are increasingly acknowledging the need to earn trust. This cultural dialogue is part of reframing AI as a shared mission for security, as opposed to a government venture that the public should fear.
  • Aligning Narrative with Reality: Fundamentally, the reframed narrative must continually point out that the “science-fiction” view of AI is misaligned with the current reality. As experts note, most military AI systems are more akin to smart assistants than independent actors. Driving this point home can correct misperceptions. The contrast between a fictional Skynet and real-world AI applications (like predictive maintenance algorithms) is stark - a reframed narrative leverages that contrast to reduce undue alarm. Defense educators and communicators therefore stress separating fact from fiction: acknowledging genuine AI-related risks (e.g. algorithm bias or adversary use of AI for disinformation) but clarifying that these are challenges manageable through policy and engineering, not reasons to halt progress. As has been noted before, even AI-enabled weapons "lack the malevolent sentience of Skynet," and keeping humans in the loop is the prudent path - so we should focus on maintaining control and ethics rather than fearing an uprising. This kind of messaging directly tackles the Terminator mythos, reframing the issue around human responsibility and strategic advantage.
In conclusion, repositioning AI in the public and policy narrative as a strategic enabler - a powerful tool under human direction - is critical for the United States to fully benefit from the AI revolution in defense. The chasm between public fear and military optimism can be narrowed by education, transparency, and consistent examples of AI’s value. Strategic-level decision makers and thought leaders increasingly advocate this reframing because they recognize that without public buy-in and understanding, even the best AI technology may fail to be adopted. The background evidence presented here supports the argument that AI is not an autonomous menace to be halted, but a strategic asset to be guided and governed wisely. Reframing the narrative in this way can help ensure robust policymaking, sustained investment, and an informed defense workforce - all oriented toward integrating AI in service of national security, responsibly and effectively.

Operational Use of AI in the US Defense Sector

AI technologies are already being fielded across multiple domains of U.S. defense operations, enhancing everything from intelligence analysis to maintenance and cybersecurity. One high-profile example is Project Maven, launched in 2017 as the Department of Defense’s “Algorithmic Warfare” initiative. Project Maven uses machine learning to process the vast streams of drone surveillance video and satellite imagery to identify potential targets with far greater speed than traditional methods [11]. By rapidly classifying objects (e.g. distinguishing hostile tanks from civilian trucks) and integrating those insights into battlefield command systems, Maven dramatically compresses the kill chain. Human operators remain in the loop to validate targets, but the AI enables them to go from analyzing only ~30 targets per hour to as many as 80, according to some reports [12]. Deployed in conflict zones like Iraq, Syria, and Yemen, Maven has proven its value by narrowing target lists for airstrikes and even helping U.S. Central Command locate enemy rocket launchers and vessels in the Middle East. These real-world results illustrate how AI can increase operational tempo and precision in intelligence, surveillance, and reconnaissance (ISR) missions, augmenting human analysts and decision-makers.

To scale such successes across the force, the Pentagon stood up the Joint Artificial Intelligence Center (JAIC) in 2018 (now reorganized under the Chief Digital and AI Office) with a mandate to accelerate AI adoption for “mission impact at scale” [13]. The JAIC coordinated DoD-wide AI efforts, developing prototypes in areas like predictive maintenance, humanitarian assistance, and warfighter health, and ensuring that lessons learned in one military service could benefit others. For example, in the realm of predictive maintenance, the Air Force’s Rapid Sustainment Office worked with industry to deploy an AI-based Predictive Analytics and Decision Assistant (PANDA) platform as a new “system of record” for aircraft maintenance [14]. PANDA aggregates data from aircraft sensors, maintenance logs, and supply records, then uses machine learning models to predict component failures and optimal maintenance scheduling. This data-driven approach has measurably improved readiness: in one case involving the B-1 bomber fleet, an AI predictive maintenance tool completely eliminated certain types of unexpected breakages and cut unscheduled maintenance labor by over 50%. These efficiencies translate to higher aircraft availability and operational reliability - a clear example of AI acting as a force multiplier for logistical and sustainment activities.
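
To make the mechanics concrete, the sketch below shows the general shape of this kind of predictive-maintenance modeling: train a classifier on aggregated sensor and maintenance features, then rank airframes by predicted failure risk so inspections can be scheduled early. The data, feature names, and thresholds are entirely synthetic and hypothetical; this is a minimal illustration, not the PANDA system or any fielded DoD model.

```python
# Minimal predictive-maintenance sketch (illustrative only; not the actual PANDA system).
# All features, labels, and numbers are synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000

# Hypothetical per-aircraft features aggregated from sensor logs and maintenance records.
X = np.column_stack([
    rng.normal(650, 80, n),    # exhaust gas temperature (deg C)
    rng.normal(35, 10, n),     # vibration level (arbitrary units)
    rng.integers(0, 400, n),   # flight hours since last overhaul
    rng.integers(0, 15, n),    # prior write-ups on this component
])
# Synthetic label: failure probability rises with temperature, vibration, hours, and write-ups.
risk = 0.002 * (X[:, 0] - 600) + 0.02 * (X[:, 1] - 30) + 0.003 * X[:, 2] + 0.05 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-(risk - 3)))).astype(int)  # 1 = failed before next check

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Rank tail numbers by predicted failure risk so maintainers can schedule inspections early.
risk_scores = model.predict_proba(X_test)[:, 1]
for idx in np.argsort(risk_scores)[::-1][:5]:
    print(f"aircraft #{idx}: predicted failure risk {risk_scores[idx]:.2f}")
```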

AI is also bolstering U.S. capabilities in less visible but critical domains such as cyber operations. Modern cyber defense involves monitoring enormous volumes of network data and responding to threats in milliseconds. Here, AI algorithms help identify anomalous patterns and intrusions far faster than human operators alone. Military cyber units are experimenting with machine learning systems that flag suspicious network behavior and even autonomously execute initial countermeasures. As one Army Cyber Command technology officer observed, AI is beginning to shift the advantage to the defender in cyberspace, partially countering the traditional dominance of offense [15]. Fast-running AI detection tools can contain attacks or malware in real time, making it “much harder for the offensive side” to succeed. At the same time, strategists recognize that AI is a double-edged sword in cyber warfare: the same technology could enable more sophisticated phishing, deepfake-induced misinformation, or automated hacking by adversaries. This has prompted the DoD to invest in AI for cybersecurity while also researching defenses against AI-driven threats.
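
As an illustration of the unsupervised pattern-flagging described above, the sketch below trains an isolation forest on synthetic flow records and surfaces statistical outliers for analyst review. The feature set, traffic profile, and contamination rate are assumptions chosen for demonstration, not a description of any fielded cyber-defense system.

```python
# Minimal network anomaly-detection sketch with synthetic flow-level features.
# Feature choices and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Mostly benign traffic: bytes sent, packet count, connection duration (s), distinct ports touched.
benign = np.column_stack([
    rng.normal(5_000, 1_500, 5_000),
    rng.normal(40, 10, 5_000),
    rng.normal(12, 4, 5_000),
    rng.integers(1, 4, 5_000),
])
# A handful of suspicious flows: large transfers touching many ports (e.g., exfiltration or scanning).
suspicious = np.column_stack([
    rng.normal(90_000, 10_000, 10),
    rng.normal(800, 100, 10),
    rng.normal(2, 1, 10),
    rng.integers(50, 200, 10),
])
flows = np.vstack([benign, suspicious])

# Fit an unsupervised detector on the traffic and flag statistical outliers for human review.
detector = IsolationForest(contamination=0.005, random_state=1).fit(flows)
labels = detector.predict(flows)  # -1 = anomaly, 1 = normal
print(f"flagged {np.sum(labels == -1)} of {len(flows)} flows for analyst review")
```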

Across the services, a variety of other AI applications are moving from pilot projects into operational use. The Navy and Coast Guard, for instance, have begun employing computer vision algorithms to scan satellite and radar data for illicit maritime activities (such as smuggling or illegal fishing) that previously went unnoticed [16]. The Army is testing AI-enabled battle management systems that fuse sensor inputs to recommend battlefield courses of action, effectively providing decision support to commanders. Even the U.S. Special Operations community has embraced AI tools for tasks like ISR analysis, language translation, and mission planning. In 2023, U.S. Special Operations Command pivoted towards aggressive adoption of AI, open-sourcing certain software and pushing deployment to the tactical edge [17]. Leaders at SOCOM rate their recent progress as substantial, but acknowledge more work is needed to integrate AI into legacy systems and train personnel to use these tools effectively. Such case studies - from Project Maven’s target recognition to PANDA’s maintenance forecasting and cyber anomaly detection - underscore that AI is no longer just a theoretical future capability. It is already enhancing operational readiness and efficiency across the U.S. defense enterprise, augmenting human warfighters in handling the growing speed and complexity of modern military missions.

From ENIAC to AI: John von Neumann's Legacy and the Next Cognitive Revolution

History shows that transformative technologies can radically enhance military capability when paired with visionary integration. A useful parallel to today’s AI revolution is the advent of electronic computing during and after World War II - a revolution epitomized by the work of John von Neumann on the ENIAC computer. Von Neumann, a Hungarian-American mathematician and polymath, was a key figure in the Manhattan Project and an early computing pioneer who recognized the strategic potential of automation in calculations [18]. In 1944, he became involved in the U.S. Army’s ENIAC project (Electronic Numerical Integrator and Computer), which was the first general-purpose electronic computer. ENIAC was initially built to compute artillery firing tables - a laborious task that previously required teams of human “computers” working with mechanical calculators and often struggling to keep up with wartime demands. By automating these computations, ENIAC could perform in seconds what took people hours or days, fundamentally changing the pace of wartime calculations. In fact, one of ENIAC’s first assignments in 1945 was running simulations for the feasibility of the hydrogen bomb, a top-secret program that would have been impractical without electronic computing power [19]. This breakthrough demonstrated how high-speed computing became a strategic enabler, allowing the United States to solve complex problems (like nuclear weapon design and ballistic trajectories) that were previously intractable or painfully slow.

Figure 2: John von Neumann


Figure 3: Two pieces of ENIAC on display at the University of Pennsylvania

John von Neumann’s influence went beyond the engineering of ENIAC; he also conceptualized how computers could serve as cognitive aids to strategists and planners. He pioneered the stored-program architecture (now known as the von Neumann architecture) that underlies virtually all modern computers, and he’s considered a father of game theory - bringing a new mathematical rigor to defense strategy. Under von Neumann’s guidance, early computers were used not only for crunching numbers but also for tasks like weather forecasting and systems analysis, essentially the forerunners of today’s data-driven decision-support systems. The early computing revolution turned what were once human-only intellectual tasks into human-machine collaborative tasks, greatly increasing speed and accuracy. For example, the time to produce complex firing tables or decrypt enemy codes dropped dramatically as machines took over the repetitive calculations. Military planning began to incorporate computational modeling, from logistics to nuclear targeting, augmenting human judgment with machine precision.

Today’s artificial intelligence represents the next phase of cognitive augmentation in warfare - a step beyond what von Neumann’s generation achieved with manual programming and calculation. If ENIAC and its successors gave commanders unprecedented computational power, AI offers something arguably even more profound: the ability for machines to learn, adapt, and assist in decision-making in real time. This can be seen as an extension of von Neumann’s legacy. Just as he envisioned applying rigorous computation to strategic problems, we now envision applying machine learning to dynamic problems like identifying insurgents in a crowd, predicting an adversary’s moves, or optimizing complex logistics under fire. The paradigm shift is similar in scale. In the mid-20th century, militaries that embraced electronic computing leapt ahead in command-and-control, intelligence, and engineering - those that lagged were left at a serious disadvantage. Likewise, in the 21st century, militaries that harness AI for a decision advantage will outpace those that do not. AI systems can sift through sensor feeds, intelligence reports, and battlefield data far faster than any team of staff officers, flagging patterns and anomalies that would otherwise be missed. This human-machine symbiosis has the potential to amplify cognition on the battlefield, much as early computers amplified calculation. It moves warfighting into a realm of information speed and complexity management that von Neumann could only hint at with game theory and primitive computers. In short, AI is positioned to do for perception and reasoning what computing did for arithmetic - enabling a new leap in military effectiveness. The challenge, as with ENIAC, is to integrate this technology wisely, guided by strategic leaders who understand its potential. In that sense, reframing AI from a feared threat into a force multiplier echoes von Neumann’s own advocacy for embracing new technology to secure a competitive edge in national security.


Implications for Defense Education and Talent Development

Realizing AI’s potential as a strategic enabler will require a profound transformation in defense education and training. Future military leaders must be as comfortable with algorithms and data as past generations were with maps and compasses. This means Professional Military Education (PME) institutions - service academies, staff colleges, war colleges, and technical schools - are updating curricula to build AI literacy at all levels. AI literacy involves understanding the basics of how artificial intelligence works, its applications and limitations, and being able to critically evaluate AI-enabled systems [20]. As one recent study on PME integration argues, AI literacy among faculty and students is now a “strategic imperative” to prepare officers for an AI-driven battlefield. Concretely, courses on topics like data science, machine learning fundamentals, and human-machine teaming are being introduced alongside traditional strategy and leadership classes. For example, the Naval Postgraduate School has launched an “Artificial Intelligence for Military Use” certificate program that educates military professionals on key AI concepts and applications, from sensors and imagery analysis to war-gaming and logistics [21]. Notably, this program does not require a coding background - reflecting an understanding that even non-technical officers need a working knowledge of AI to make informed decisions about procurement and deployment. Similar initiatives are underway at other institutions, aiming to produce officers and DoD civilians who can bridge the gap between operators and data scientists and effectively champion AI projects.

In addition to technical skills, ethical and strategic judgment regarding AI must be woven into the education of military leaders. Just as the ethics of nuclear weapons or cyber operations are covered in curricula, the unique ethical questions posed by AI deserve attention. PME courses are beginning to incorporate case studies on algorithmic bias, autonomous weapons, and the legality of AI-driven targeting under the Law of Armed Conflict. The goal is to instill “ethical AI fluency” - ensuring that officers not only understand what AI can do, but also the moral and legal frameworks guiding its use. Students might debate scenarios, for instance, about an autonomous drone engaging a target without a direct human command, examining how DoD’s AI Ethics Principles (responsibility, equity, traceability, reliability, governability) should apply. By grappling with these issues in the classroom, future commanders and planners will be better prepared to make tough calls about AI employment in the field. They learn that embracing AI does not absolve them of accountability - on the contrary, it requires more educated oversight. The military’s emphasis on leadership with integrity extends into the AI era: an officer needs the knowledge to question an AI recommendation, recognize when the data might be flawed or the algorithm biased, and insist on appropriate human control measures. Thus, courses in ethics, law, and policy are evolving to cover AI, ensuring the warrior ethos and professional norms adapt to include stewardship of intelligent machines.

Another critical aspect of defense education in the AI age is fostering interdisciplinary and interagency training. AI in national security isn’t confined to the Department of Defense alone - it spans the intelligence community, homeland security, defense industry, and academia. Recognizing this, PME institutions and training commands are increasing exchanges and joint learning opportunities. For example, the DoD has partnered with universities (like MIT and others) to offer specialized AI courses to military cohorts, and it convenes events such as the AI Partnership for Defense which bring together allied military officers and defense civilians to share AI lessons learned [22]. On the interagency front, one can envision combined training where military analysts and, say, CIA or NSA analysts learn side by side about applying AI to intelligence fusion - building networks of expertise that span organizational boundaries. Such cross-pollination is vital because the challenges of AI (from data sharing to ethics) often require a whole-of-government approach. A Naval officer who understands how the Department of Homeland Security uses AI for critical infrastructure protection, or an Air Force officer who grasps the FBI’s perspective on algorithmic bias, will be better equipped to collaborate during joint operations and crises.

Crucially, faculty development and leader development programs are adapting to empower this educational shift. Instructors at war colleges and service schools are being encouraged to familiarize themselves with AI tools and concepts so they can mentor students effectively. U.S. Army War College faculty, for instance, documented their experience of gradually integrating AI into their teaching - highlighting that faculty comfort with AI is a prerequisite to student education. Within the operational forces, commanders are also pushing “digital literacy” initiatives down the ranks. A notable example is U.S. Special Operations Command, which recently had about 400 of its leaders complete a six-week MIT-affiliated course on AI and data analytics. The intent is to create a leadership cadre that not only understands the technology but “demands it,” actively pulling AI solutions into the field. This top-down and bottom-up approach to education - from generals to junior officers and enlisted technicians - will cultivate a culture where AI is seen as an essential tool in the arsenal. In summary, defense education is being reimagined for the information age: blending technical literacy, ethical grounding, and joint cooperation to produce military and intelligence professionals who can harness AI’s power responsibly and creatively in service of national security.


Governance and Risk Management of Military AI

As the U.S. military integrates AI into critical operations, robust governance and risk management frameworks are paramount to ensure these technologies remain strategic enablers and not liabilities. The Department of Defense has proactively set guardrails through high-level principles and policies. In 2020, the DoD adopted a set of Ethical Principles for AI, which articulate how AI systems should be developed and used in accordance with the military’s legal and ethical values. These five principles — Responsible, Equitable, Traceable, Reliable, and Governable — now guide all DoD AI projects. In practice, they mean that humans must remain accountable for AI decisions, AI outcomes should be as free from bias as possible, systems should be transparent and auditable, they must be rigorously tested for safety and effectiveness, and there must always be the ability to disengage or shut off an AI system that is behaving unexpectedly. For example, the “Responsible” principle explicitly states that DoD personnel will exercise appropriate levels of judgment and care when deploying AI and will remain answerable for its use. This institutionalizes a “human-in-the-loop” (or at least “on-the-loop”) mandate, ensuring that AI augments human decision-making rather than replaces it in any uncontrolled way.

Implementing these principles requires concrete governance measures. The Pentagon’s Joint AI Center (now CDAO) has been charged as a focal point for coordinating AI ethics implementation, including standing up working groups to develop detailed guidelines and tools for compliance. One focus area is algorithmic transparency - making AI systems as explainable as possible to their human operators. The “Traceable” principle addresses this, mandating that AI technologies be developed such that relevant personnel possess an appropriate understanding of how they work, including insight into training data and logic. This is leading to investments in explainable AI research for defense applications, so that a commander can ask not just “What is the AI recommending?” but “Why is it recommending that?”. For instance, if an AI tool flags a particular vehicle as hostile, commanders want confidence in the basis for that judgment (sensor signatures, behavior patterns, etc.), rather than accepting a “black box” output. Explainability builds trust and helps humans and AI collaborate more effectively - a lesson learned from early deployments like Project Maven, where analysts had to validate AI-generated target cues. It also enables troubleshooting: if an AI system makes a questionable suggestion, engineers and operators can audit the decision process to identify potential biases or errors (aligning with the Equitable principle’s aim to minimize unintended bias).
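
One simple way to give operators that first-order "why" is to report which inputs most influence a model's recommendation. The sketch below uses permutation importance on a toy classifier; the features, labels, and classifier are hypothetical stand-ins for illustration, not the DoD's actual explainability tooling.

```python
# Illustrative sketch of surfacing the "why" behind an AI recommendation via permutation importance.
# The classifier, feature names, and data are hypothetical; this is not DoD tooling.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
feature_names = ["thermal_signature", "speed_kph", "convoy_spacing_m", "emitter_detected"]

# Synthetic training data: label 1 = "likely military vehicle", 0 = "likely civilian".
X = np.column_stack([
    rng.normal(0.5, 0.2, 3_000),
    rng.normal(60, 20, 3_000),
    rng.normal(40, 15, 3_000),
    rng.integers(0, 2, 3_000),
])
y = ((X[:, 0] > 0.55) & (X[:, 3] == 1)).astype(int)

clf = GradientBoostingClassifier(random_state=2).fit(X, y)

# Permutation importance estimates how much each input drives the model's output,
# giving an operator a first-order answer to "why is it recommending that?".
result = permutation_importance(clf, X, y, n_repeats=10, random_state=2)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>18s}: {score:.3f}")
```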

Risk management of military AI systems spans technical, operational, and strategic levels [23]. Technically, one risk is the reliability and robustness of AI models. In battlefield conditions, data can be noisy, adversaries can attempt to deceive AI (through camouflage, decoys, or cyber means), and systems may encounter scenarios not covered in training. The DoD addresses this through extensive testing and evaluation regimes. Per the “Reliable” principle, each AI capability must have well-defined uses and be tested for safety and effectiveness within those use cases. For example, before an AI-driven target recognition system is fielded, it undergoes trials across different environments (desert, urban, jungle, etc.) to evaluate performance and failure modes. Recent conflicts have provided cautionary tales: simplified AI tools reportedly had mixed results in the Russia-Ukraine war, sometimes misidentifying objects (e.g., classifying heavy machinery as trees or falling for inflatable decoys) when faced with weather or camouflage conditions beyond their original training. Human analysts outperformed these nascent systems in complex scenarios, underscoring that current AI is far from infallible and must be used with human oversight. To mitigate such risks, DoD policy emphasizes continuous operator training and system tuning - AI models should be updated with new data, and users must understand the system’s limitations. Moreover, the “Governable” principle requires that AI systems be designed with the ability to detect and avoid unintended consequences, and crucially, to disengage or deactivate if they start to act anomalously. This is essentially an insistence on a “kill switch” or fallback control for autonomous systems, which is vital in weapons platforms to prevent accidents or escalation. In sum, engineering robust AI means planning for failures: building redundancy, fail-safes, and manual override options into any critical AI-enabled system. 
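
A minimal sketch of that slice-based test-and-evaluation idea appears below: a single model is scored separately against synthetic "environment" slices of increasing difficulty so that degraded performance in any slice is visible before fielding. The slice names, noise levels, and review threshold are illustrative assumptions, not a real evaluation regime.

```python
# Sketch of slice-based test & evaluation: score one model separately per environment slice
# so degraded performance in any slice surfaces before fielding. Data and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)

def make_slice(n, noise):
    """Generate a synthetic detection task; higher noise stands in for harder conditions."""
    X = rng.normal(0, 1, (n, 5))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, noise, n) > 0).astype(int)
    return X, y

# Train under "benign" conditions, then evaluate on progressively harder environment slices.
X_train, y_train = make_slice(5_000, noise=0.3)
model = LogisticRegression().fit(X_train, y_train)

slices = {"desert": 0.3, "urban": 0.8, "jungle": 1.2, "heavy_weather": 2.0}
for name, noise in slices.items():
    X_test, y_test = make_slice(1_000, noise)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name:>14s}: accuracy {acc:.2f}")  # any slice below a set threshold triggers review
```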

On the operational and strategic risk front, DoD leaders are aware that AI could introduce new uncertainties even as it solves problems. One concern is the acceleration of decision cycles potentially leading to humans being outpaced. If an AI can identify and recommend engagement with a target in seconds, there’s a risk that command and control might not properly vet actions in time. The U.S. approach to this is “human machine teaming” - using AI to speed up information processing, but still requiring a human decision at the trigger point for lethal force, consistent with DoD Directive 3000.09 which governs autonomous weapons. This aligns with broad expert consensus that human judgment must remain central: RAND researchers, for instance, note a “broad consensus regarding the need for human accountability” in the use of military AI, recommending that responsibility clearly rest with commanders and human control span the entire system lifecycle. Another risk is strategic instability: if one side’s AI gets an advantage, there’s pressure on adversaries to respond quickly (or even preemptively). The DoD is approaching this by coupling its pursuit of AI with confidence-building measures and international dialogue. The U.S. has publicly committed to the lawful, ethical use of AI in warfare and is engaging allies and partners to do likewise. By championing principled AI use, the U.S. hopes to set norms that reduce the risk of inadvertent escalation - for example, by agreeing that humans will supervise any AI that can initiate lethal action, or that early-warning AI systems will be designed to avoid false alarms.

Additionally, governance involves accountability and oversight mechanisms within the military. Just as there are safety boards for accidents, there may be review boards for AI incidents or anomalies. The Defense Department is instituting processes to review AI programs for ethical compliance and is considering certification regimes (analogous to operational test & evaluation for hardware) for AI systems before deployment. The chain of command is being educated that owning an AI tool doesn’t diminish their responsibility for outcomes; if an autonomous vehicle or a decision aid makes a mistake, commanders are expected to investigate and address it just as they would a human error. This is reinforced by the ethical principle that DoD personnel “remain responsible for the development, deployment, and use” of AI. In practical terms, that could mean developing doctrine and TTPs (tactics, techniques, and procedures) for AI use - e.g., specifying that a human must verify an AI-generated target before engagement, or that there be at least two human checkpoints for any fully automated process in live operations.
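
To illustrate how such a doctrinal rule could be encoded in software, the sketch below models a recommendation that cannot be released to any automated workflow until a required number of distinct human reviewers have concurred. All class, field, and reviewer names are hypothetical; this is a conceptual sketch, not an implementation of DoD Directive 3000.09 or any fielded system.

```python
# Minimal sketch of a two-checkpoint human-in-the-loop gate; names and structure are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Recommendation:
    """An AI-generated recommendation awaiting human review before any action is taken."""
    target_id: str
    confidence: float
    rationale: str
    approvals: list = field(default_factory=list)

    def approve(self, reviewer: str) -> None:
        # Each distinct human reviewer adds one approval; duplicates are ignored.
        if reviewer not in self.approvals:
            self.approvals.append(reviewer)

    def releasable(self, required_approvals: int = 2) -> bool:
        # The automated workflow may proceed only after the required number of distinct
        # human checkpoints have concurred; otherwise it halts and waits.
        return len(self.approvals) >= required_approvals


rec = Recommendation("TRK-0417", confidence=0.91, rationale="thermal signature + emitter match")
rec.approve("analyst_on_duty")
print(rec.releasable())   # False: one checkpoint is not enough
rec.approve("watch_officer")
print(rec.releasable())   # True: two distinct human approvals recorded
```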

In summary, U.S. defense planners are actively putting frameworks in place so that AI is used safely, ethically, and effectively. The Pentagon’s approach is one of controlled experimentation: push the envelope with AI to gain its advantages, but do so under strict human oversight, with constant testing, and guided by a strong ethical compass. This governance mindset reframes AI from a feared “black box” risk into a well-supervised partner for the warfighter. It acknowledges risks - technical glitches, enemy counter-AI tactics, legal ambiguities - and seeks to mitigate them through responsible design and policy. With these measures, the U.S. aims to reap the strategic benefits of AI (speed, scale, insight) while upholding the values and control that have long guided the use of advanced technologies in national security.

Strategic Competition and Decision Superiority in the AI Era

Artificial Intelligence has emerged as a central arena of strategic competition, much like nuclear technology or space exploration were in earlier eras. Today, the competition is perhaps most intense between the United States and its near-peer rival China, with profound implications for global security and decision superiority on future battlefields. China has explicitly prioritized AI in its national and military strategy, seeking to become the world leader in AI by 2030 and to transform the People’s Liberation Army (PLA) into a “world-class military” by mid-century, in part through what it calls the “intelligentization” of warfare [24]. A key facet of China’s approach is its policy of Military-Civil Fusion, which marshals the nation’s robust civilian tech sector in direct support of military AI development. Unlike the U.S., where private tech companies and the Pentagon cooperate but are separate, China’s centralized model blurs this line - private AI firms are effectively co-opted into serving PLA needs. This has allowed China to tap advanced research and commercial innovations at speed. In recent years, the PLA has established joint military-civilian AI laboratories, funded tech competitions to encourage dual-use AI innovations, and stood up dedicated units to integrate commercial tech into PLA operations. The results are telling: according to one study by Georgetown’s CSET, the PLA now procures the majority of its AI-related equipment from China’s private tech companies rather than traditional state-owned defense enterprises. In other words, China is harnessing the dynamism of its AI startup ecosystem under a top-down strategic directive - a combination that has yielded rapid progress in areas like facial recognition surveillance, autonomous drones, and AI-assisted command systems for the PLA.

The United States, for its part, is determined not to cede its historical advantage in military technology and decision-making superiority. American defense officials have stated plainly that AI is critical to future military preeminence. A 2024 Army report noted that AI is the one technology that will largely determine which nation’s military holds the advantage in coming decades. This recognition has led the U.S. to craft its own strategy to win the “race” for military AI, albeit by leveraging America’s strengths: innovation, alliances, and a values-driven approach. The U.S. is pursuing what might be termed a “responsible offset” - seeking to out-innovate adversaries in AI while maintaining robust ethics and stability measures. Practically, this involves significant investments in R&D (the Defense Department requested over $1.8 billion for AI/ML in the 2024 budget), new organizational structures like the CDAO to unify efforts, and closer collaboration with the private sector. The Pentagon knows that many cutting-edge AI breakthroughs originate in companies like Google, Microsoft, OpenAI, or myriad startups. Unlike China’s state-driven fusion, the U.S. approach incentivizes cooperation through initiatives such as the Defense Innovation Unit (DIU) and AFWERX/Army Futures Command tech hubs, which aim to fast-track commercial AI tech into U.S. military use. A recent bold initiative is Deputy Secretary Kathleen Hicks’ “Replicator” program, announced in late 2023, which aims to field “multiple thousands” of AI-enabled autonomous systems across multiple domains (air, land, sea) within 18-24 months. Replicator’s goal is to leverage autonomy and AI at scale to counter the numerical advantages that China might deploy in a conflict (for example, swarms of inexpensive drones could act as a force multiplier to blunt a larger naval fleet or saturate an adversary’s air defenses). By rapidly scaling such capabilities, the U.S. seeks to ensure it can offset adversary advantages - much as it did with precision weapons in the past - and complicate any opponent’s war plans.

Decision superiority - the ability to observe, orient, decide, and act faster and more effectively than an adversary (the OODA loop concept) - is a core focus of AI competition. AI has the potential to accelerate the OODA loop to unprecedented speeds. For the side that masters this, AI can provide a decisive edge in command and control. Imagine a future conflict scenario: AI algorithms instantly fuse multi-source intelligence (satellite imagery, electronic intercepts, social media, etc.), identify emerging threats, and present command with optimized courses of action, all in real time. The commander enabled by such AI support can make decisions inside the enemy’s decision cycle, forcing the adversary into a reactive stance. This is essentially what Project Maven and similar ISR AIs foreshadow - compressing a targeting process that once took hours into minutes or less. Faster decision-making, however, is only an advantage if paired with accurate and informed decision-making. Here lies a nuanced competition: it’s not just about acting quickly, but about acting wisely with AI-provided insight. The U.S. is thus investing in AI that improves not only speed but the quality of situational awareness - for instance, AI that can predict an adversary’s next moves or detect subtle patterns in adversary behavior that humans might miss. This could dramatically improve the U.S. military’s ability to anticipate and shape a confrontation rather than just react.

For deterrence, the message that emanates is powerful: a military that can think and act faster across domains can credibly threaten to neutralize an opponent’s actions before they bear fruit. U.S. defense leaders believe integrating AI into the force will bolster deterrence by projecting confidence that America can “prevail on future battlefields” despite challenges. The flip side is that if the U.S. were perceived as lagging in AI, adversaries like China (or Russia) might be tempted to press advantages, thinking the U.S. unable to respond in time. Thus, maintaining a leadership position in AI is seen as critical to preventing conflict as much as winning one. Indeed, a technologically superior force equipped with AI decision-support and autonomy could deter aggression by making any attack plan against it too uncertain or likely to fail.

That said, the AI arms race also carries deterrence dilemmas. One concern analysts note is that when both sides have high-speed, automated decision systems, there’s a risk of escalation if those systems lack sufficient human override. A minor incident could be misinterpreted by an AI as a full-blown attack requiring immediate response, leading to a rapid spiral - a scenario sometimes called “flash war.” Avoiding this requires careful strategy. The U.S. and other responsible powers will need to establish rules of the road for military AI, perhaps new agreements or at least tacit understandings (analogous to Cold War arms control in spirit, if not in formal treaty). Confidence-building measures, like transparency about certain defensive AI systems or hotlines to clarify ambiguities, could mitigate the risk that ultra-fast AI systems push humans out of the loop in crisis decision-making. In the competition with China, this means that even as the U.S. develops AI to maintain superiority, it also seeks dialogue on norms - for example, the Pentagon has indicated interest in talks about AI safety and crisis communications to reduce chances of an accidental clash due to AI misjudgment. Balancing competitive urgency with strategic stability is tricky but vital. The U.S. aims to win the AI race by demonstrating not only better technology but also stronger governance of that technology, thereby persuading allies and neutral countries to align with the U.S. vision of AI-enhanced security rather than China’s. As former Google CEO Eric Schmidt (who chaired the Defense Innovation Board) remarked, U.S. leadership in articulating ethical AI principles shows the world that democracies can adopt AI in defense responsibly. In the long run, this could translate into a coalition advantage - if U.S. allies trust American AI systems and agree on their use, it amplifies collective deterrence against aggressors who might use AI in destabilizing ways.

To close this survey of the competitive landscape: AI is becoming a cornerstone of what strategists term the new "Revolution in Military Affairs." It promises to reshape how wars are deterred, fought, and won. Both Washington and Beijing know that superiority in AI could mean faster and more precise operations, better coordinated forces, and more resilient systems - in short, an edge in almost every dimension of conflict. The United States, leveraging its open society and innovative economy, is striving to maintain its edge by integrating AI across defense while upholding the rule of law and international norms. China, with its state-driven approach, is rapidly challenging that edge. The outcome of this competition will significantly influence the global balance of power. Decision superiority in the next conflict may belong to whichever nation can most effectively blend human and artificial cognition into its way of war. For the U.S., the task is ensuring that it is our forces, educated and empowered by AI, that observe first, understand first, decide first, and act decisively, thereby deterring conflict or ending it on favorable terms if it must be fought.


Conclusion and Recommendations

The exploration from computing to cognition - from the ENIAC era of John von Neumann to today's AI - illustrates a clear thesis: artificial intelligence, managed correctly, is not a menacing "third offset" to be feared, but a strategic enabler that the United States can harness to enhance national security. Far from replacing the human element, AI can augment American defense capabilities in profound ways: accelerating decision-making, optimizing resource use, and uncovering insights in oceans of data that would overwhelm human analysts. To fully realize this potential, however, the U.S. must reframe its mindset and approaches. AI should be viewed not as a mysterious black box or a mere buzzword, but as a set of powerful tools - tools that require investment in people, sound governance, and visionary planning to integrate effectively. In short, as this paper has argued, the conversation needs to shift from "How might AI threaten us?" to "How can we smartly leverage AI to stay ahead of threats?" The following forward-looking recommendations are offered to specific stakeholders in the defense and intelligence community to drive this shift:
  1. Professional Military Education (PME) Institutions - Build an AI-Ready Force: PME institutions should lead the way in cultivating a force that is literate in AI and comfortable with emerging technology. This means updating curricula continuously to include not just fundamentals of AI, but case studies of its use in warfare, ethical decision exercises, and practical training on AI-enabled systems. Military academies and ROTC programs can introduce cadets to AI through STEM courses and wargames featuring autonomous systems. Intermediate and senior service colleges (like Command and Staff Colleges and War Colleges) should require coursework on technology and innovation, ensuring that future battalion commanders and generals alike can champion data-driven approaches. Faculty development is critical - instructors need opportunities (and incentives) to stay current on tech trends, perhaps via sabbaticals with industry or AI research labs. PME schools can also establish partnerships with civilian universities for joint courses or certification programs in AI (similar to the NPS certificate described earlier). Beyond formal curricula, wargaming and exercises should incorporate AI elements: for example, a joint wargame where officers must employ AI tools for logistics or intelligence and deal with adversary AI capabilities in the scenario. By learning in a sandbox environment, leaders will gain intuition about AI’s strengths and pitfalls. Finally, PME institutions should instill a mindset of lifelong learning in technology - given the pace of AI advancement, one-off education isn’t enough. Officers and NCOs will need continuous refreshers, which could be delivered through online courses, micro-certifications, and periodic tech immersion programs throughout their careers. The outcome sought is a U.S. military ethos that values digital competency on par with marksmanship or tactical acumen, producing leaders who confidently wield AI-enabled capabilities as extensions of their command.
  2. Defense Planners and Policymakers - Integrate AI into Strategy and Force Design: For those in the Pentagon, Joint Staff, and combatant commands who shape requirements, doctrine, and budgets, the mandate is to fully integrate AI considerations into all levels of planning. At the strategic level, this means incorporating AI development goals into defense strategy documents and threat assessments. Planners should routinely ask how AI changes the game in each mission area and what must be done to stay ahead. For example, war planners should account for AI-driven enemy tactics and how U.S. forces will counter or exploit them. The deliberate planning process can include red-teaming with AI: use AI models built from the adversary's perspective to simulate how a foe might use AI against us, and develop counters accordingly. In capability development, the Joint Capabilities Integration and Development System (JCIDS) should treat AI and data as critical enablers for every new platform or system. Requirements for a new aircraft or ship, for instance, should explicitly outline how it will leverage AI for maintenance, targeting, or autonomous functions. Resource allocation must back up these priorities with sustained R&D funding for military AI, including investments in test infrastructure (data libraries, simulation environments) and secure, scalable compute resources for the services. Defense planners should also emphasize open architecture and interoperability for AI systems so that different platforms and allies can share data and AI services seamlessly, avoiding stovepipes. Experimentation units (like the Army AI Task Force or the Air Force's Project Arc) should receive robust support to prototype and field AI solutions quickly, with feedback loops to doctrine writers. Meanwhile, policymakers need to refine and publish clear doctrines or concepts of operations (CONOPS) for AI-enabled warfare (for example, how do we fight with human-machine teams, and what is the doctrine for autonomous wingman drones in an air campaign?). These guidelines will help front-line units incorporate AI tools into their SOPs in a disciplined way. Another key recommendation for defense planners is to continue engaging allies: include AI interoperability and data-sharing agreements in alliance talks (NATO, etc.), conduct combined exercises with AI components, and share best practices on ethics and safety. By shaping international standards proactively, the U.S. and its partners can collectively mitigate risks (like uncontrolled autonomous weapons) and present a united front in the face of adversaries' AI use. In essence, planners must ensure that AI is woven into the fabric of force design and strategy, not treated as a niche or add-on - it should be as integrated as joint operations doctrine itself.
  3. Federal Intelligence Community Leadership - Leverage AI for Decision Advantage: For leaders in the intelligence agencies (CIA, NSA, DIA, NGA, etc.), AI offers an unprecedented opportunity to enhance analytic capabilities and strategic warning, but it requires bold action to adapt decades-old analytic processes. First, intelligence agencies should accelerate the adoption of AI and machine learning for processing the ever-growing volume of data (“big data”) in espionage and open-source intelligence. This includes deploying AI to automatically transcribe and translate foreign communications, flag anomalies in financial transactions or shipping data, generate summaries of vast social media feeds, and identify patterns in satellite imagery (NGA is already doing some of this with illegal fishing detection, for example). By automating low-level tasks, AI frees human analysts to focus on higher-level judgment and synthesis. Augmented analysis tools - like AI assistants that can answer natural language questions or test hypotheses against data - should become standard issue for analysts, with training on how to use them effectively. Intelligence community (IC) leaders also need to invest in talent: hiring data scientists and computational experts, and upskilling current analysts with data literacy (similar to the military’s efforts). Joint duty rotations between IC agencies and the DoD’s AI units (or even tech companies under appropriate safeguards) could cross-pollinate expertise.

    Moreover, the IC must develop frameworks for evaluating AI-derived intelligence. Analysts are trained in sourcing and skepticism; now they will need tradecraft for evaluating algorithmic outputs (e.g., understanding confidence levels, potential biases in training data, and error rates of AI models). IC agencies might create an “AI validation unit” that rigorously tests analytic algorithms and guards against false positives or adversary deception of our AI. Speaking of deception: intel leaders should assume that adversaries will try to mislead U.S. AI systems (through spoofing, deepfakes, etc.), so counter-deception techniques and deepfake detection become crucial new intelligence disciplines. A forward-looking recommendation is for the Director of National Intelligence (DNI) to champion a National Intelligence AI Strategy that parallels the DoD’s efforts - aligning all 18 IC elements on common standards for AI ethics, data-sharing (within the bounds of law), and rapid technology insertion. Such a strategy could establish centralized resources like a high-performance computing cloud and classified big data repositories accessible to all IC analysts, leveling the playing field so even smaller agencies can use advanced AI tools without massive organic infrastructure. Finally, intelligence leadership should integrate AI into warning and crisis response mechanisms. AI prediction models might help anticipate geopolitical instability or militarization by identifying subtle indicators far in advance. During fast-moving crises, AI decision-support could help senior officials explore scenarios (“If adversary does X, likely responses Y and Z”). However, these tools must be rigorously vetted and always placed under human supervision to avoid overreliance on machine prognostication. The IC’s ethos of considered judgment and avoidance of surprise can be well-served by AI, but only if embraced with the same diligence applied to other intel methods.
  4. Cross-cutting Recommendation - Cultivate a Culture of Innovation and Adaptation: Across PME, defense planning, and intelligence analysis, a unifying recommendation is to foster a culture that prizes innovation, agility, and informed risk-taking with AI. The federal national security enterprise can draw lessons from the tech sector here: encourage pilot projects, allow "fast failure" and learning in controlled environments, and reward individuals who find creative ways to apply AI to mission problems. Senior leaders should communicate a consistent vision that AI is a priority - not to replace warfighters or analysts, but to empower them. This involves addressing organizational inertia and fear: some personnel may worry AI will make their roles obsolete or that mistakes with AI will be career-ending. Leaders must allay these fears by highlighting AI successes, sharing knowledge of AI limitations openly, and framing adoption as an imperative to stay ahead of adversaries like China (whose investments we cannot ignore). Initiatives like hackathons, AI challenge problems, or innovation competitions within agencies can spark bottom-up solutions - for example, an Army brigade S-2 (intelligence officer) might develop a machine learning model to predict insurgent attacks from incident data, and higher HQ can amplify and resource that idea if it shows promise (a minimal illustrative sketch of such a model follows this list). The DoD and IC should also streamline bureaucratic processes that hinder tech adoption (acquisition reform is beyond our scope, but rapidly acquiring and fielding software and AI updates is crucial). Modernizing infrastructure is part of culture too - ensuring deployed units have the connectivity and computing power to use AI tools, and that analysts have access to data forward at the speed of relevance.
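
The S-2 example above can be made concrete with a small, hedged sketch: synthetic incident data, a handful of invented features, and an off-the-shelf logistic regression from scikit-learn. Nothing here is a validated model - the feature names, weights, and labels are fabricated purely to show the kind of workflow an analyst might prototype before higher headquarters decides whether to resource it, and the report of precision and recall illustrates why false-positive costs matter as much as headline accuracy.

```python
# Illustrative only: a brigade S-2 style experiment predicting whether an
# incident occurs in a given area/day from simple historical features.
# All data here is synthetic; a real effort would use curated reporting,
# rigorous validation, and command review before any operational use.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000

# Toy features: prior incidents in area (7-day count), tips received,
# distance to main supply route (km), market day (0/1).
X = np.column_stack([
    rng.poisson(2, n),
    rng.poisson(1, n),
    rng.uniform(0, 20, n),
    rng.integers(0, 2, n),
])
# Toy label loosely correlated with the features, for demonstration only.
logit = 0.6 * X[:, 0] + 0.8 * X[:, 1] - 0.15 * X[:, 2] + 0.5 * X[:, 3] - 1.5
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Report precision/recall so the analyst sees false-positive costs,
# not just overall accuracy.
print(classification_report(y_test, model.predict(X_test)))
```
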
In all these efforts, maintaining the American ethical high ground is essential. Reframing AI as an enabler also means communicating - to the force, the public, and the world - that the U.S. will use AI in alignment with democratic values and laws. This stance not only differentiates the U.S. from authoritarian competitors but also builds trust internally that the AI revolution will not run roughshod over moral considerations. It’s heartening that DoD leadership has embraced ethical AI principles and that military thinkers emphasize keeping humans in control. Carrying this forward, ethics training, legal oversight, and possibly international agreements on AI in warfare will reinforce that AI adoption by the U.S. strengthens both our capabilities and our principles.

Conclusion: “From Computing to Cognition” is more than a catchy phrase - it encapsulates the journey the U.S. defense enterprise must continue on. In the 20th century, those who exploited computing power gained a decisive edge; in the 21st, those who master AI will shape the future of security. The United States has the opportunity to lead this next revolution, just as it did the last, by embracing AI as a force multiplier across education, operations, and strategy. By investing in our people’s skills, establishing strong ethical and practical governance, and out-innovating our adversaries, we ensure that AI becomes a source of American strategic advantage. The recommendations above chart a path for military educators, defense planners, and intelligence professionals to collaboratively drive this transformation. The message is clear: AI is here to stay - and if we integrate it wisely, creatively, and responsibly, it will magnify the effectiveness of U.S. national security institutions while preserving the values that distinguish us on the world stage. In the final analysis, technology wars are won not by the machines, but by the humans who wield them best. The United States can and must be the nation that wields AI to sharpen our insight, quicken our decision-making, and strengthen our security, thereby turning a perceived risk into a strategic cornerstone for decades to come.


References:

[1] https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/ 

[2] https://fivethirtyeight.com/features/chatgpt-thinks-americans-are-excited-about-ai-most-are-not 

[3] https://getcoai.com/news/how-the-terminator-continues-to-influence-perception-of-ai-40-years-later

[4] https://thebulletin.org/2023/03/how-science-fiction-tropes-shape-military-ai 

[5] https://media.defense.gov/2019/feb/12/2002088963/-1/-1/1/summary-of-dod-ai-strategy.pdf

[6] https://thebulletin.org/2018/08/jaic-pentagon-debuts-artificial-intelligence-hub

[7] https://www.defense.gov/News/News-Stories/Article/Article/2427173/artificial-intelligence-enablers-seek-out-problems-to-solve

[8] https://www.cnas.org/publications/reports/ai-and-international-stability-risks-and-confidence-building-measures

[9] https://www.navy.mil/Press-Office/News-Stories/display-news/Article/2239977/us-naval-war-college-conference-on-artificial-intelligence-looks-to-move-beyond

[10] https://www.mitre.org/sites/default/files/2021-11/prs-20-0975-designing-a-new-narrative-to-build-an-AI-ready-workforce.pdf

[11] https://defensetalks.com/united-states-project-maven-and-the-rise-of-ai-assisted-warfare

[12] https://en.wikipedia.org/wiki/Project_Maven

[13] https://en.wikipedia.org/wiki/Joint_Artificial_Intelligence_Center

[14] https://defensescoop.com/2023/05/10/air-force-selects-ai-enabled-predictive-maintenance-program-as-system-of-record

[15] https://warontherocks.com/2024/04/how-will-ai-change-cyber-operations

[16] https://defensescoop.com/special/operational-ai-in-the-u-s-military-defensescoop-special-report-2023

[17] https://www.defense.gov/News/News-Stories/Article/Article/4177966/experts-say-special-ops-has-made-good-ai-progress-but-theres-still-room-to-grow/

[18] https://ahf.nuclearmuseum.org/ahf/profile/john-von-neumann

[19] https://en.wikipedia.org/wiki/ENIAC

[20] https://smallwarsjournal.com/2025/05/07/embracing-the-inevitable

[21] https://online.nps.edu/-/128-artificial-intelligence-for-military-use-certificate

[22] https://www.defense.gov/News/Releases/release/article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence

[23] https://www.rand.org/pubs/research_reports/RR3139-1.html

[24] https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2024/MJ-24-Glonek/
