Although China prohibits AI from taking command of its armed forces, researchers have developed an AI commander. This “digital commander”, strictly confined to a laboratory at the Joint Operations College of the National Defence University in Shijiazhuang, Hebei province, replicates a human commander in every respect, from experience and thought processes to personality traits, flaws included.
In extensive computer war games involving all branches of the People’s Liberation Army (PLA), the AI commander has been given unprecedented supreme command authority, rapidly learning and evolving in the constantly changing virtual wars. This pioneering research project was publicly disclosed in May in a peer-reviewed paper published in the Chinese-language journal Command Control & Simulation. The team, led by senior engineer Jia Chenxing, said that while AI technology carries both promise and risk in military applications, the project offers a “feasible” answer to this growing dilemma.
In China, the military must strictly adhere to the principle: “The Party commands the gun.” Only the Central Military Commission of the Communist Party of China has the authority to mobilise the PLA.
As AI technology acquires the ability to make independent decisions, forward-deployed units including drones and robotic dogs are given more freedom of movement and the power to fire. However, command authority at headquarters remains firmly in human hands. The PLA has prepared numerous operational plans for potential military conflicts in areas such as Taiwan and the South China Sea. A crucial task for scientists is to test these plans in simulations, to “weigh the pros and cons and gain insight into the chaos of battle,” wrote Jia and his colleagues.
Campaign-level military simulations often require human commanders to make immediate decisions in response to unforeseen events. But senior PLA commanders are few and their time is scarce, making it impossible for them to take part in a large number of war simulations.
“The current joint operations simulation system suffers from poor simulation experiment results due to the lack of command entities at the joint battle level,” the researchers said.
The AI commander steps in when human commanders cannot take part in a large-scale virtual battle or exercise command authority themselves. Within the confines of the laboratory, it wields this power freely, without any human interference.
“The highest-level commander is the sole core decision-making entity for the overall operation, with ultimate decision-making responsibilities and authority,” wrote Jia and his colleagues. This is the highest-level role publicly reported for AI in military research. The US Army’s AI, for example, serves only as a “commander’s virtual staff”, providing decision support, while the US Air Force’s AI pilots take part only in frontline training and do not interfere with war-room operations, according to Jia’s team.
Senior PLA commanders have had very different combat styles. General Peng Dehuai, for instance, wreaked havoc on US forces with unexpected swift strikes and infiltrations during the Korean war; like US General George Patton, he favoured winning through risk. General Lin Biao, by contrast, renowned for his victories over the Japanese and Kuomintang armies, avoided risk and had a meticulous decision-making style similar to that of Britain’s Field Marshal Bernard Law Montgomery.
Jia’s team said that the initial setting of the AI commander mirrored a seasoned and brilliant strategist, “possessing sound mental faculties, a poised and steadfast character, capable of analysing and judging situations with calmness, devoid of emotional or impulsive decisions, and swift in devising practical plans by recalling similar decision-making scenarios from memory”.
However, this setting is not fixed.
“The virtual commander’s personality can be fine-tuned if deemed necessary,” they added.
Under immense pressure, humans “struggle to formulate a fully rational decision-making framework under stringent timelines,” said Jia’s team.
Rather than relying on pure analysis, the AI commander leans on empirical knowledge for its combat decisions, seeking satisfactory solutions: it retrieves similar scenarios from memory and quickly formulates a viable plan.
Humans are also forgetful. To simulate this significant weakness, the scientists imposed a size limit on the AI commander’s decision-making knowledge base: when the memory reaches that limit, some knowledge units are discarded.
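The paper, as reported, does not spell out how this bounded memory works. Purely as an illustration, the Python sketch below shows one way a size-limited, experience-based store could recall a plan from the most similar past scenario and discard old knowledge units once a limit is hit; all names here (KnowledgeBase, remember, recall) are hypothetical, not the researchers’ actual design.

```python
from collections import deque

# Hypothetical sketch: a bounded "decision knowledge base" that stores past
# scenarios and the plans chosen for them, discarding the oldest entries
# (one possible eviction policy) once the size limit is reached.
class KnowledgeBase:
    def __init__(self, max_units: int):
        self.max_units = max_units
        self.units = deque()  # each unit: (scenario_features, plan)

    def remember(self, scenario: dict, plan: str) -> None:
        # Discard the oldest knowledge unit when the memory limit is hit,
        # mimicking the forgetfulness the researchers built in.
        if len(self.units) >= self.max_units:
            self.units.popleft()
        self.units.append((scenario, plan))

    def recall(self, current: dict) -> str | None:
        # Retrieve the plan attached to the most similar stored scenario
        # (similarity here is a crude feature-overlap count).
        if not self.units:
            return None
        def similarity(stored: dict) -> int:
            return sum(1 for k, v in current.items() if stored.get(k) == v)
        scenario, plan = max(self.units, key=lambda unit: similarity(unit[0]))
        return plan


# Example: recalling a workable plan from a similar past battle.
kb = KnowledgeBase(max_units=1000)
kb.remember({"terrain": "coastal", "enemy_air": "strong"}, "suppress air defences first")
kb.remember({"terrain": "urban", "enemy_air": "weak"}, "advance with armour")
print(kb.recall({"terrain": "coastal", "enemy_air": "strong"}))
```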
The AI commander enables the PLA to run large numbers of “human-out-of-the-loop” war simulations. It identifies new threats, crafts plans and makes optimal decisions based on the overall situation when battles falter or results fall short, and it learns and adapts from victories and defeats alike. All of this happens without human intervention, “boasting advantages including ease of implementation, high efficiency and support for repeated experimentation,” said Jia’s team.
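Again only as a hedged illustration, not the PLA’s actual system: a closed “human-out-of-the-loop” experiment of this kind could look like the sketch below, which reuses the hypothetical KnowledgeBase and kb from the earlier snippet; run_battle merely stands in for the joint-operations simulator, and every name and threshold is an assumption.

```python
import random

def run_battle(plan: str) -> float:
    # Stand-in for the simulator: returns an outcome score between 0 and 1.
    return random.random()

def experiment(kb: "KnowledgeBase", scenarios: list, threshold: float = 0.5) -> None:
    for scenario in scenarios:
        # Recall a plan from a similar past battle, or fall back to a default.
        plan = kb.recall(scenario) or "default contingency plan"
        score = run_battle(plan)
        if score < threshold:
            # The battle faltered: revise the plan and try again, with no human input.
            plan = "revised " + plan
            score = run_battle(plan)
        # Learn from victory and defeat alike by remembering which plan was used.
        kb.remember(scenario, plan)

# Example: many repeated, unattended runs over the same scenario set.
for _ in range(100):
    experiment(kb, [{"terrain": "coastal", "enemy_air": "strong"},
                    {"terrain": "urban", "enemy_air": "weak"}])
```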
Countries worldwide are locked in a race in AI military applications, with China and the United States leading the charge.
While Beijing and Washington strive not to be outdone in this crucial domain, they share concerns about the threat AI’s unchecked development poses to human security. Senior officials from China, the United States and Russia have been negotiating to craft a set of regulations to mitigate the risks of AI militarisation, including prohibiting AI from gaining control over nuclear weapons.