Nations working to create rules for use of AI, autonomous weapons

The Longshot unmanned aircraft, a project of General Atomics and the Defense Advanced Research Project Agency, is displayed at the Air & Space Forces Association Air, Space & Cyber Conference in September 2023. THE ASSOCIATED PRESS

U.S. STRATEGIC COMMAND

More than 50 nations have joined the United States in setting guidelines for use of artificial intelligence (AI) and autonomous weapons systems in ways that adhere to international law, according to a top U.S. military official. As AI becomes more prominent in weapons development and other military applications, stakeholders are developing rules that govern its use.

At issue is the need to establish a rules-based international order around the military use of AI, experts say. Significantly, Russia and the People’s Republic of China (PRC) have not signed on to the proposed guidelines. The rules would govern the use of missiles and attack drones, which form an integral part of homeland defense.

Michael C. Horowitz, U.S. Department of Defense (DOD) deputy assistant secretary of defense for force development and emerging capabilities, provided an update on the DOD’s AI development efforts during a January 2024 virtual talk hosted by the Center for Strategic & International Studies Wadhwani Center for AI and Advanced Technologies.

In November 2023, the U.S. State Department published the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which aims to establish norms for the responsible military use of AI and autonomy. Fifty-one countries had endorsed the declaration by January 2024, and nations soon will begin meeting to build consensus around the use of this emerging technology, including sharing best practices and enabling partner nations to develop their own capabilities, Horowitz said. “I think there’s recognition that the sorts of norms we’re trying to promote are things that all countries should be able to get behind,” he said. “We think of this as good governance, so countries can develop and deploy AI-enabled military systems safely, which is in everybody’s interest.”

In January 2023, the DOD updated its directive on AI’s use in weapons systems, calling for rigorous testing and scrutiny of all new weapons systems. “A weapons system that isn’t safe, isn’t predictable, doesn’t work. It’s not useful,” Horowitz said. “When it comes to developing and deploying the capabilities that the joint force needs to deter war, and if necessary to prevail if conflict occurs, we need to have confidence in our systems.”

The DOD directive also calls for systems to allow human commanders and operators control over the use of force, particularly in the case of nuclear weapons, which Horowitz said adheres to the department’s Nuclear Posture Review. “We think the decision to use nuclear weapons is so important that we think human involvement should be central,” he said. “We would expect that other countries would share that commitment and we would hope that they would make that commitment explicit.”

DOD is working with partner nations as part of its Replicator initiative, an agencywide process to accelerate delivery of innovative capabilities to warfighters at speed and scale. The Defense Innovation Unit (DIU) leads the initiative, announced in November 2023. The effort’s initial focus is countering the Chinese Communist Party’s military by fielding unmanned systems that can be deployed in large numbers and, because of their relatively low cost, be put at risk during a conflict. DIU and international partners are also analyzing the use of uncrewed systems in the Russia-Ukraine war.

“We used to think about either you have precision, or you have mass. That’s no longer the case,” Horowitz said. “What we need in many instances is going to be precise mass.”
