China’s People’s Liberation Army (PLA) is reportedly developing an AI tool using Meta’s publicly accessible Llama model, a revelation raising concerns about how open-source technology could be repurposed for military applications. According to academic papers and analyses reviewed by Reuters, PLA-linked researchers have customized Meta’s Llama, a large language model (LLM), to create “ChatBIT,” an AI application optimized for intelligence analysis, military decision-making, and other defense-related functions.
Leveraging Llama for Military Innovation
In a June 2024 paper, six Chinese researchers from institutions including the PLA’s Academy of Military Science (AMS) described how they used Meta’s Llama model to construct ChatBIT, an AI tool fine-tuned for military dialogue and question-answering tasks. The researchers started from Llama 2 13B, an earlier model in Meta’s Llama series, and incorporated their own parameters tailored to military operations.
ChatBIT has shown considerable promise, reportedly reaching roughly 90% of the performance of OpenAI’s GPT-4. However, details of its benchmarks and any deployment plans remain undisclosed. The paper suggests future enhancements for ChatBIT, potentially expanding its role to strategic planning, training simulations, and operational decision-making.
Sunny Cheung, a Jamestown Foundation researcher specializing in Chinese dual-use technologies, emphasized the significance of this adaptation. “This is the first time there has been substantial evidence of PLA researchers exploring open-source LLMs for military applications,” Cheung said. The finding underscores how open-source AI tools are increasingly embedded in military research worldwide.
Open-Source Challenges for Meta
Meta, the parent company of Facebook, has promoted a largely open-source policy for its AI models, including Llama. This approach aligns with Meta CEO Mark Zuckerberg’s goal of democratizing AI development, giving individuals and institutions worldwide access to advanced technology. However, while Meta’s terms prohibit using its models for military and espionage purposes, its open-source nature complicates enforcement.
“Any use of our models by the PLA is unauthorized and contrary to our acceptable use policy,” stated Molly Montgomery, Meta’s director of public policy. Because Meta’s AI models are publicly available, the company has limited means of curbing misuse beyond publicizing its terms. Meta points to measures intended to promote responsible use, but enforcing compliance, especially in international settings, remains difficult.
US Policy Responses Amid AI Competition
The revelation that Chinese military-affiliated researchers have adapted Meta’s Llama has intensified US discussions on AI accessibility and potential security risks. In October 2023, President Joe Biden signed an executive order addressing AI oversight, acknowledging both the innovative benefits of open-source models and their security risks. Biden’s administration aims to curb US investment in Chinese technology sectors, including AI, to limit potential threats to national security.
The Pentagon similarly acknowledges both the advantages and disadvantages of open-source models. John Supple, a Pentagon spokesperson, said the Department of Defense will continue monitoring AI developments and the capabilities of global competitors.
Domestic Developments in Chinese AI
The PLA’s use of Meta’s model is part of China’s broader AI strategy, focused on technological autonomy. Recent domestic initiatives, such as the launch of a chatbot by the Cyberspace Academy trained on “Xi Jinping Thought,” reflect this drive. This model, designed on a closed-source architecture, provides state-sanctioned responses aligned with Chinese ideology, underscoring China’s efforts to create AI systems rooted in domestic policy and political doctrine.
Broader Debate: Security vs. Accessibility
This case exemplifies the complex debate over open-source AI accessibility. While proponents like Zuckerberg argue that restricting access could hinder Western innovation, others warn that open-source AI could be weaponized. Technologists caution that powerful AI models made public could be repurposed by hostile actors, particularly as espionage becomes increasingly digital.
Zuckerberg, however, believes openness is essential, asserting that restricting AI access may weaken the US position in the global tech landscape. “Our adversaries are great at espionage. Restricting access to open models could just limit American innovation,” he stated.