Executive Summary
The People's Liberation Army (PLA) has demonstrated clear interest in using generative artificial intelligence (AI) to support intelligence work, has designed methods and systems that apply generative AI to intelligence tasks, and has likely procured generative AI for intelligence purposes. Both the PLA and China's defense industry have very likely adapted foreign and domestic large language models (LLMs) to develop specialized models that can effectively carry out intelligence tasks. The PLA and China's defense industry have created generative AI-based intelligence tools that can reportedly process and analyze intelligence data, generate intelligence products, answer questions, provide recommendations, facilitate early warning, and support decision-making, among other functions. These tools are broadly intended to improve the speed, efficiency, accuracy, and scale of intelligence tasks while reducing costs. Though elements of the PLA have expressed optimism about the benefits of generative AI and are likely taking initial steps to apply this technology to intelligence work, the PLA has very likely recognized the limitations and risks of this technology. Consequently, the extent to which the PLA will integrate generative AI into intelligence activities, and the ultimate effectiveness of this integration, remains unclear.
The PLA's interest in using generative AI to support military intelligence presents challenges for both the PLA and the West. For the PLA, given the limitations and risks of generative AI, successful adoption of this technology will require experimenting with the intelligence applications of generative AI, accurately assessing the outcomes of these experiments, and appropriately applying generative AI to intelligence work based on those outcomes and assessments; failure to do so could result in inaccurate intelligence that degrades the quality of decision-making. Moreover, if PLA intelligence analysts use generative AI models that were developed to conform with Chinese Communist Party (CCP) ideology or trained on ideologically biased analytical products, the PLA risks reducing the objectivity of intelligence analysis. For the West, the PLA's application of generative AI to intelligence work creates technology transfer challenges and highlights the risk of Chinese counterintelligence organizations using generative AI to generate inauthentic but convincing information to mislead Western intelligence analysts and degrade the intelligence value of open-source information.
Key Findings
- PLA media and researchers affiliated with the PLA have argued that the application of generative AI to military intelligence has a wide range of potential benefits, including improving the collection and analysis of intelligence and providing enhanced decision-making capabilities to military commanders, but have also recognized various challenges and risks associated with using this technology for intelligence work.
- Likely realizing the intelligence limitations of general-purpose generative AI models, the PLA and China's defense industry are likely prioritizing the development and use of specialized models that have been fine-tuned for intelligence tasks.
- The PLA and China's defense industry have very likely used a mix of proprietary and open-source LLMs from foreign and domestic developers to create generative AI-based intelligence tools. Foreign LLMs used this way very likely include models from Meta, OpenAI, and BigScience, among others, while domestic LLMs very likely include models from DeepSeek, Tsinghua University, Zhipu AI, and Alibaba Cloud, among others.
- PLA patent applications reveal that the PLA has designed methods and systems that use generative AI to facilitate intelligence tasks such as generating open-source intelligence (OSINT) products, processing satellite imagery, supporting event extraction, and processing event data.
- In a patent application filed in December 2024, a Chinese state-owned defense industry research institute proposed using OSINT, human intelligence (HUMINT), signals intelligence (SIGINT), geospatial intelligence (GEOINT), and technical intelligence (TECHINT) data to train a military LLM to specialize in intelligence tasks, purportedly enabling the enhanced military LLM to support every phase of the intelligence cycle and improve decision-making during military operations.
- The PLA and China's defense industry have likely procured generative AI technology to support OSINT and science and technology (S&T) intelligence, an indicator that at least some elements of China's military are likely beginning to apply generative AI to intelligence tasks.
- The PLA, which very likely rapidly adopted DeepSeek's generative AI models in early 2025, is likely using DeepSeek's LLMs for intelligence purposes, based on claims by a Chinese defense contractor that it has provided a DeepSeek-based OSINT model to the PLA.
- The PLA is likely concerned that foreign counterintelligence organizations could use generative AI to produce convincing inauthentic content to mislead Chinese intelligence personnel and degrade the intelligence value of open-source information. Chinese counterintelligence organizations could apply generative AI in a similar manner.
Methodology
To assess the PLA's views on and application of generative AI in intelligence work, Insikt Group collected and analyzed articles published in PLA media, academic research authored by PLA personnel, PLA and Chinese defense industry patent applications, PLA and Chinese defense industry procurement records, information published by Chinese defense contractors, and data available in the Recorded Future Platform, among other sources. The sources cited in this report do not necessarily represent official PLA policies related to generative AI; rather, they demonstrate how individuals and organizations situated within the PLA and China's defense industry are likely exploring and developing the intelligence applications of generative AI. This report does not assess the veracity of technical claims from the PLA and China's defense industry; however, entities that develop or sell intelligence-related generative AI tools likely have an incentive to exaggerate the effectiveness of generative AI and downplay its deficiencies, so information provided by these entities should be viewed with skepticism.
Generative AI is a broad term, and some of its subcategories lack clear boundaries, which occasionally created challenges for this investigation. In some instances, we observed references to generative AI models like LLMs being used for possible non-generative functions; we assessed that these references were still relevant to our investigation. Moreover, Chinese sources frequently use the term "large model" when discussing AI models rather than more specific terms like LLM. Not all large models are considered generative AI, so we did not automatically assume that every reference to a large model was related to generative AI. We classified a reference to a large model as generative AI only when other information confirmed the model's generative nature. Appendix A provides a glossary of generative AI terminology for readers unfamiliar with this technology.
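The corroboration rule described above can be sketched as a toy filter. This is an illustrative assumption for clarity only; the `GENERATIVE_SIGNALS` list and function name are hypothetical, not tooling used by Insikt Group:

```python
# Toy sketch of the classification rule: a "large model" mention is treated
# as generative AI only when the surrounding context independently confirms
# a generative function. The signal list is a hypothetical example.
GENERATIVE_SIGNALS = (
    "llm",
    "text generation",
    "chatgpt",
    "generates reports",
    "hallucination",
)

def is_generative_reference(mention: str, context: str) -> bool:
    """Return True only if the mention refers to a large model AND the
    context contains independent evidence of generative capability."""
    if "large model" not in mention.lower():
        return False
    context_lower = context.lower()
    return any(signal in context_lower for signal in GENERATIVE_SIGNALS)

# A bare "large model" mention is not enough on its own:
print(is_generative_reference("large model", "deployed for radar target filtering"))  # False
# Corroborating context flips the classification:
print(is_generative_reference("large model", "an LLM that performs text generation"))  # True
```

The conservative default (classify as non-generative absent corroboration) mirrors the report's stated approach of not assuming every "large model" reference involves generative AI.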
Views on Generative AI in Military Intelligence
PLA Daily
The PLA's official newspaper, PLA Daily, has published a handful of articles that directly discuss the intelligence implications of generative AI, which are summarized in Table 1. These articles discuss the potential benefits of generative AI for military intelligence, highlighting the supposed ability of this technology to generate intelligence products, predict changes on the battlefield, facilitate intelligence activities during both peacetime and wartime, improve the efficiency of intelligence analysis, and provide decision-making support to commanders. These articles also note potential challenges associated with generative AI, suggesting that generative AI models cannot replace up-to-date intelligence and warning that the intelligence agencies of competing countries could exploit deepfakes to interfere with rival agencies.
| Date | Summary |
| --- | --- |
| August 2024 | PLA Daily published an article on large generative AI models in operational command and control, which contends that large models can generate intelligence briefs, extract the main points of intelligence, and predict changes in the battlefield situation. However, the article also suggests that large models cannot replace up-to-date intelligence information, arguing that these models can function as encyclopedias but can only query information from their training data. As such, large models reportedly depend on up-to-date intelligence information to analyze and judge the battlefield situation, organize and integrate intelligence products, and connect and mine intelligence knowledge. |
| April 2023 | PLA Daily published an article on the military applications of ChatGPT, which describes ChatGPT as having intelligence applications in both peacetime and wartime. In peacetime, ChatGPT can reportedly serve as a virtual assistant to help analysts analyze the massive amount of information available on the internet, improving the efficiency of analysis and mining hidden high-value intelligence. In wartime, ChatGPT can reportedly integrate large amounts of battlefield intelligence into a comprehensive battlefield situation report automatically, reducing the workload of intelligence personnel and improving the intelligence analysis and planning capabilities of combat personnel. |
| March 2023 | PLA Daily published an article on ChatGPT that predicts combatants will have strong intelligence collection capabilities and near-real-time information perception capabilities in future informatized and intelligentized wars. To this point, the article notes that ChatGPT could be used for basic work such as data analysis, decision-making support, and natural language processing, qualitatively improving commanders' decision-making capabilities through mass battlefield information processing. |
| June 2020 | PLA Daily published an article on the dangers of deepfake technology that warns deepfakes could be used to interfere with intelligence work. The article warns that the intelligence agencies of competing countries could use deepfakes to interfere with rival agencies and set limits on the scope of their operations. |
Table 1: Summary of PLA Daily articles that discuss the intelligence implications of generative AI (Source: PLA Daily; Insikt Group)
Academic Research
PLA researchers, especially personnel affiliated with the Academy of Military Science (AMS) Military Science Information Research Center (MSIRC), have expressed optimism about the intelligence applications of generative AI but have also recognized the fallibility of this technology, raising a variety of issues associated with applying generative AI to intelligence work. In one notable study, AMS MSIRC researchers assessed the opportunities and challenges of using generative AI to support national defense S&T intelligence, providing recommendations for taking advantage of generative AI's opportunities while mitigating its challenges. PLA researchers have also analyzed efforts within the United States (US) military to apply generative AI to intelligence tasks, likely aiming to learn from the US military's experience and adapt best practices. Researchers not affiliated with the PLA but associated with other elements of China's party-state system have likewise published on the intelligence implications of generative AI, reflecting sentiments expressed by PLA personnel and providing insight into debates likely occurring within China's party-state system.
General Views in the PLA
PLA researchers have expressed interest in the intelligence applications of generative AI, with some describing it as a potentially transformative technology. For example, in August 2024, researchers affiliated with the AMS MSIRC published an article about the effects of AI on intelligence research that contends the development of technologies like machine learning, deep learning, and generative AI has created unprecedented opportunities for intelligence research. In June 2024, researchers affiliated with AMS MSIRC, the AMS National Innovation Institute of Defense Technology, and two civilian universities published a study that details their use of Meta's Llama 13B model to develop an LLM that specializes in military OSINT, arguing that LLMs can facilitate comprehensive and accurate intelligence support for military commanders and claiming that their LLM could support intelligence analysis, strategic planning, simulation training, and command decision-making. Similarly, in February 2024, researchers affiliated with AMS MSIRC published an article that suggests applying ChatGPT-like technologies to command and control systems could result in major improvements to intelligence capabilities.
Despite recognizing the possible intelligence applications of generative AI, PLA researchers have also discussed challenges associated with this technology. In the aforementioned February 2024 article, AMS MSIRC researchers warn that the intelligence limitations of ChatGPT-like technologies integrated into command and control systems could result in catastrophic consequences. Likewise, in the June 2024 article, AMS and civilian university personnel contend that current LLMs have serious hallucination issues and are unsuitable for direct use in OSINT. In September 2024, researchers affiliated with the PLA National University of Defense Technology's (NUDT) College of Information and Communications and College of Intelligent Science published an article about the relationship between AI and OSINT that argues deepfakes produced with generative AI create significant challenges for efforts to exploit the massive amount of information available on the internet for intelligence purposes.
National Defense S&T Intelligence Example
Figure 1: Translated table of potential national defense S&T intelligence generative AI applications and benefits (Source: Empowering National Defense Science and Technology Intelligence; Insikt Group)
In November 2023, researchers affiliated with AMS MSIRC published an article about the potential opportunities and challenges of using generative AI for national defense S&T intelligence. According to the article, traditional approaches to intelligence can no longer identify potential threats and opportunities in a timely and accurate manner, a common view that has almost certainly driven the adoption of new intelligence collection, processing, and analysis technologies within the PLA and China's defense industry. The authors contend that generative AI is profoundly affecting the creation and application of knowledge, including national defense S&T intelligence. They claim that generative AI has the potential to improve intelligence collection, evaluation, analysis, and generation (see Figure 1), detailing how generative AI could assist intelligence personnel rather than asserting that this technology could replace them. For example, the authors argue that generative AI could:
- Help intelligence personnel familiarize themselves with unfamiliar technical fields when assigned new intelligence tasks, thereby improving the depth and scope of analysts' understanding of intelligence targets
- Aid intelligence personnel in refining and discovering relationships within large amounts of data and assist personnel with tasks like comparison, inference, providing examples, and induction
- Generate diverse scenarios and assumptions to help intelligence analysts broaden their thinking and avoid cognitive biases during intelligence analysis, as well as evaluate analysts' analytical conclusions to identify potential biases
- Automatically recommend relevant images and audiovisual materials, generate statistical reports, and automatically synthesize relevant materials to facilitate diversified expression of intelligence analysis content beyond traditional intelligence reports
However, they also warn that generative AI could bring unprecedented uncontrollability, uncertainty, and high levels of risk to national defense S&T intelligence. They identify specific challenges such as:
- Heightened counterintelligence risks, including the risk of technological competitors using generative AI to create fake technical documents and deepfakes to mislead national defense S&T intelligence efforts
- A lack of LLMs specifically designed for national defense S&T intelligence
- Generative AIs limited ability to deal with uncertain and biased information
- Insufficient national defense S&T intelligence corpora for training LLMs, including difficulties associated with using state secrets and sensitive intelligence to train LLMs
- Reliability issues like data leakage, algorithmic black boxes, value bias, and unexplainability
Describing generative AI as a double-edged sword, the AMS researchers suggest that the field of national defense S&T intelligence should pursue measures to simultaneously take advantage of the opportunities of generative AI and mitigate its challenges. These include:
- Gradually introducing generative AI into national defense S&T intelligence work and evaluating the effectiveness of this technology after its introduction
- Working to improve relevant corpora and LLMs
- Iteratively combining intelligence workflows that involve both human and generative AI inputs to ensure reliable and credible results
- Developing technologies to ensure the traceability and verify the reliability of AI-generated content
Interest in US Military Applications
PLA researchers have assessed how the US military is applying generative AI to intelligence tasks, likely aiming to learn from the US military's experience in a manner resembling China's previous efforts to study OSINT practices and AI military applications in the US. For example, in May 2024, a researcher affiliated with the AMS War Research Institute published an article analyzing how the US is exploring the military applications of generative AI. The author discusses organizational changes, policy guidance, experimental testing, and security measures the US has reportedly implemented; particular military applications the US has reportedly conceptualized, tested, or put into use; and challenges the US has reportedly encountered. Notably, the author claims that the Defense Innovation Unit (DIU) of the US Department of Defense (DoD) initiated a technology program in May 2023 to explore the applications of generative AI in OSINT collection and analysis. This technology reportedly facilitated automatic data mining and evaluation and visualized the battlefield information environment for commanders. The author highlights that DIU required this technology to help analysts edit and disseminate content, comply with DIU's AI standards, and be usable within the DoD's information environment.
Non-PLA Perspectives
Beyond the PLA, specialists situated within other segments of China's party-state system have also examined the opportunities and challenges of using generative AI for intelligence work. Though these discussions do not focus on military intelligence, they can still provide insight into how the broader party-state system is likely grappling with the intelligence implications of this technology. For example, in November 2024, researchers affiliated with the Peoples Public Security University of China (PPUSC) School of State Security published an article that highlights the risk of "false information pollution," online disinformation created by generative AI, disrupting OSINT work. Moreover, in a June 2024 article that focuses on the potential effect of ChatGPT and other similar AI tools on intelligence work, researchers affiliated with the PPUSC Public Security Intelligence Research Center argue that careless use of ChatGPT could undermine the CCP's ideological leadership of intelligence work. The researchers remind readers that ChatGPT was developed by a US company and was largely trained on an English-language corpus. They warn that ChatGPT could be affected by Western capitalist values, generate false but convincing information based on neoliberalism and ideological neutrality, erode ideological discourse and management, and facilitate the invisible penetration of ideology into intelligence work. The views expressed in this article coincide with Chinese regulators' broader efforts to ensure the correct ideological alignment of generative AI models.
To read the entire analysis, as well as receive more information about the author, Zoe Haver, download the full report as a PDF.
Source: Recorded Future
Source Link: https://www.recordedfuture.com/research/artificial-eyes-generative-ai-chinas-military-intelligence