
AI Use May Erode Critical Skills in National Security

Former defense officials warn that over-reliance on generative AI could erode the critical thinking skills essential for the U.S. national security workforce.

By Eleanor Vance

Eleanor Vance is a national security and technology correspondent for Neurozzio. She specializes in defense policy, the impact of emerging technologies on government agencies, and the intersection of AI with human cognition and strategic decision-making.


The increasing integration of artificial intelligence tools across U.S. government and society is raising concerns among former defense officials. Experts warn that over-reliance on generative AI, such as chatbots and large language models, could weaken the critical thinking and analytical abilities essential for the national security workforce, potentially undermining the country's ability to respond to complex global threats.

According to Caroline Baxter, who served as a U.S. Deputy Assistant Secretary of Defense from 2021 to 2024, the very cognitive skills that AI tools are often used to supplement are the bedrock of effective national security decision-making. The push for efficiency through AI may come at a hidden cost to human cognition.

Key Takeaways

  • Experts warn that frequent use of generative AI could degrade essential cognitive skills like critical thinking and rapid analysis.
  • These skills are fundamental for national security professionals who handle high-stakes decisions with life-or-death consequences.
  • Recent studies suggest AI use can shift brain activity away from problem-solving and towards merely verifying AI-generated content.
  • A proactive strategy involving AI literacy, clear use-case definitions, and strong governance is needed to mitigate these risks.

The Cognitive Price of AI-Driven Efficiency

Artificial intelligence is being adopted across sectors to enhance productivity. From classroom learning aids to advanced military applications like Project Maven, which synthesizes intelligence to identify targets, AI is marketed as a tool to make complex tasks easier and faster. However, this convenience may have unintended consequences for the human mind.

Caroline Baxter described her former role at the Department of Defense as one requiring constant, rapid-fire decision-making. Her effectiveness depended on her ability to quickly analyze large volumes of information and apply deep institutional knowledge. "Critical and analytical thinking were the load-bearing walls of my job," she noted, highlighting the skills that are now at the center of the AI debate.

What is Generative AI?

Generative AI refers to a subset of artificial intelligence capable of creating new content, such as text, images, or code. Tools like ChatGPT and other large language models (LLMs) are designed to assist with research, drafting, and idea generation by processing vast amounts of data and producing human-like responses.

While AI can streamline workflows, emerging evidence suggests it may levy a "silent cost on our own cognitive skills." For a workforce tasked with protecting national security, any degradation of these abilities is a significant risk that policymakers must address before the technology becomes fully entrenched.

Emerging Evidence of Cognitive Decline

Concerns about AI's impact on cognition are not just theoretical. Educators and researchers have begun to document observable changes in how people think and learn when using generative AI tools. Many in academia worry that the technology bypasses the difficult but essential process of learning.

"The trouble with generative AI is that it short-circuits that process entirely," one professor expressed, voicing a common concern that students can now get an answer without developing the underlying skills to find it themselves.

This observation is supported by scientific research. A major study from early 2024 found that using generative AI shifts a user's mental focus: instead of concentrating on gathering and analyzing information, the brain turns to verifying and integrating AI-generated outputs. The researchers concluded that this loss of practice could degrade cognitive abilities over time.

AI Adoption by the Numbers

  • Use of generative AI in the workplace has doubled in the last two years.
  • Nearly 30% of American white-collar workers use AI on a daily or weekly basis.
  • 44% of white-collar organizations have integrated AI into their operations in some form.

Another controversial, not-yet-peer-reviewed study indicated that writing with AI assistance resulted in less connectivity across key brain regions compared to writing without it. This suggests the tool may reduce natural creativity and idea generation. While more research is needed, these initial findings point to a potential vulnerability for professions that rely on sharp, independent thinking.

A Workforce Shaped by AI from an Early Age

The widespread adoption of AI is not limited to the professional world. The technology is rapidly becoming a part of daily life, starting from childhood. Today, AI is being integrated into toys and, following a 2025 Executive Order, into public school classrooms from kindergarten through 12th grade.

This means that future generations of national security professionals will have grown up with AI as a constant companion. The high school class of 2026 is the last group of students who will remember an educational environment before the widespread availability of tools like ChatGPT. By the time these students enter the workforce, reliance on AI may be so deeply ingrained that working without it could be challenging.

The U.S. government is already moving forward with AI integration. The Pentagon began incorporating AI in 2018 and released a formal adoption strategy in 2023. Earlier this year, OpenAI announced a major initiative to bring its tools, including ChatGPT, to the federal government. This rapid adoption makes it critical to understand and plan for the potential cognitive side effects.

A Strategic Path for Responsible AI Integration

To harness the benefits of AI without compromising the cognitive strength of its workforce, experts argue for a deliberate and strategic approach. This involves more than just developing the technology; it requires developing the people who use it.

Caroline Baxter proposes a four-step framework for responsible integration:

  1. Establish a Standardized AI Literacy Curriculum: This curriculum should be taught in schools and workplaces, covering AI's history, terminology, and proper usage. The goal should be to enhance human cognitive prowess, not simply to achieve technological dominance.
  2. Clearly Define AI's Value Proposition: Organizations must determine where AI tools should and, more importantly, should not be used. AI should be applied with specific intent rather than as a universal solution.
  3. Identify Skills Safe to Outsource: While core abilities like critical thinking must be protected, some ancillary skills may become obsolete and can be safely offloaded to AI. Proactively identifying these skills will help fit the technology to its purpose.
  4. Develop Comprehensive AI Governance: Policymakers must create regulations and guidelines that ensure AI is developed and distributed responsibly across all sectors of society.

The importance of the human mind in warfare was famously captured by Marine Corps General James Mattis, who said, "The most important six inches on a battlefield is between your ears." As technology advances, ensuring that the minds wielding these powerful tools remain sharp is paramount. The challenge is not to stop AI's progress, but to strengthen the human intellect alongside it.